M4dH4TT3rXL i have read the manual thanks, i have searched google for hours and ive been trying in #wordpress to no avail but thanks for stating the obvious
XLM4dH4TT3r: WordPress is PHP. It uses IP addresses or DNS names. It does not know (or care) about NICs
XLM4dH4TT3r: in any case: your question is very much offtopic here
M4dH4TT3ryeah i hear that a lot on freenode because channels only want to talk on topic, but regardless of whether it's on topic, it's usually off topic to them
sallebabilen: Check the binlog at the Master to see what kind of event is that and think if it is possible to skip it. You might need to resync some tables
ElishaCan anybody help me optimize a query, out of sudden it stopped being fast
jkavalik!t Elisha use explain
ubiquity_botElisha: http://dev.mysql.com/doc/refman/5.6/en/execution-plan-information.html and http://dev.mysql.com/doc/refman/5.6/en/explain.html
Elishajkavalik I did, but indexes are not used (even though present)
jkavalikElisha, good start, then pastebin or gist the query, explain results, structure of the table(s) and if you know, then what changed and what plan was it using before
thumbs!t Elisha idfk
ubiquity_botElisha: Please paste your query, the EXPLAIN select.., and the relevant SHOW CREATE TABLE/s in one pastebin with the sql formatted so it is readable
jkavalikElisha, "DISTINCT(node.nid) AS nid_1" looks like it does something else than you think it does - "DISTINCT" takes effect for the entire select list, not the "node.id" only
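A minimal sketch of jkavalik's point, assuming a hypothetical Drupal-style `node` table: DISTINCT applies to the entire select list, not just the column it is written next to.

```sql
-- DISTINCT is not a function; the parentheses are misleading.
-- Both of these deduplicate over whole (nid, title) pairs, not over nid alone:
SELECT DISTINCT(node.nid) AS nid_1, node.title FROM node;
SELECT DISTINCT node.nid AS nid_1, node.title FROM node;

-- To deduplicate on nid only, aggregate instead:
SELECT node.nid AS nid_1, MIN(node.title) AS title
FROM node
GROUP BY node.nid;
```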
NaktibaldaElisha: why are you using subquery?
Naktibaldait is wrapped by SELECT COUNT, so you should rewrite it as a single query
ElishaUnfortunately I am not the author of the query
thumbstime to fix it, then.
Elishanot going to happen
Elishathe weird thing is that on the other machine this works very fast
jkavalikcan you compare the explains from those two machines, this one does not look unreasonable
NaktibaldaEXPLAIN it on the other machine
Elishait has slight difference
NaktibaldaElisha: how long does that query take?
ElishaUnknown like forever
Elishait is full :)
Elishaand stacking up
Naktibaldawhat's the status of that query?
ElishaUsing index condition...
Elishacopy to tmp
Naktibaldawhat's the longest time of stuck query?
jkavalikjust "show processlist;" without full will be better for finding times
ElishaI use information_schema.processlist with max(time)
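The two approaches mentioned here, side by side (the `information_schema.processlist` query is a sketch of what Elisha describes, not their exact statement):

```sql
-- Plain processlist: truncated query text, but includes the Time column.
SHOW PROCESSLIST;

-- Or query the schema table to find the longest-running statements:
SELECT id, user, time, state, info
FROM information_schema.processlist
ORDER BY time DESC
LIMIT 5;
```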
thumbsElisha: it's a pretty horrible query. Perhaps 5.7 will run it better
ElishaMariaDB is certainly better 10.x
Naktibaldait isn't that bad
jkavalikbut still processing 1k rows should not take that long
jkavalikmaybe some statistics hiccup made few instances use a bad plan and they slowed everything down too much?
Elishafriend says after 5years that it doesn't work
Elishaor how to refresh statistics
Elishaoptimize didn't help
jkavalikoptimize rarely helps, analyze table is for stats (even when it should not be needed for innodb, but sometimes the optimizer just gets stuck and needs a proper kick)
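A sketch of the "proper kick" jkavalik mentions, assuming a hypothetical table `t` and filter column `some_col`:

```sql
-- OPTIMIZE TABLE rebuilds the table and mostly reclaims space;
-- ANALYZE TABLE refreshes the index statistics the optimizer uses.
ANALYZE TABLE t;

-- Then compare the chosen plan before and after:
EXPLAIN SELECT * FROM t WHERE some_col = 42;
```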
Elishamaybe this could work
password2anyone want to tell me wtf i was thinking when i made this qeury ? :D
jkavalikpassword2, (what) were you thinking?!! ;)
password2... AND ( week(loggedtimestamp,1) > (week(curdate(),1) - 20) )
password2that condition fails when you start considering years :D
password2now i need a better way to show the last x-days
Xgcpassword2: datediff or something similar.
archivistand between
jkavalik"timestamp > (curdate() - interval 20 day)" ?
password2i just arrived at the same line
Naktibaldait is -20 week
password2well actually i used 2 month instead
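The year-safe rewrite the channel converges on, using the `loggedtimestamp` column from password2's snippet (shown as a WHERE-clause fragment):

```sql
-- Fails across year boundaries: WEEK() restarts near 0 each January.
-- AND ( week(loggedtimestamp,1) > (week(curdate(),1) - 20) )

-- Year-safe: compare against a real date instead.
AND loggedtimestamp >= CURDATE() - INTERVAL 2 MONTH
```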
password2i should actually zero pad the missing dates somehow
password2but my mysql fu is not that good
Xgcpassword2: What missing dates?
password2idk how to explain exactly, basically there are no rows for every day, so when using the group function it goes from, say, 2015-01-01 to 2015-01-10
Naktibaldayou need a table containing all dates
Xgcpassword2: There are several ways to resolve that. One is to join with a table of dates that contains all the dates you care about. That can be a static / base table or a derived table.
password2but the dates i care about is the last x amount of dates
Xgcpassword2: So do you care about anything more than 100 years ago?
Xgcpassword2: and 100 years from now?
Xgcpassword2: Create a table containing all those dates.
password2that feels really hacky
Xgcpassword2: Now you can use between logic (or whatever you wish) to grab a list of dates between 2 dates. Outer join that with your other data. Bingo, you have a list of all dates and non-null other data that represents your exist data.
Xgcpassword2: Again, you can do that dynamically with a derived table. Some databases have a way to generate a table in more complex ways.
Xgcpassword2: Feel free to do it any other way.
Xgcpassword2: An alternative is to let your report layer fill in missing report rows. Let the SQL return only existing data.
password2basically I am too lazy atm , maybe after i fix all the other bugs i will come back to this
XgcEach method has issues of one kind or another.
Xgcpassword2: You can even do this with a derived list of sequential integers.
XgcForget specific dates. You can then derive the dates as you wish based on those integers. All these methods are pretty simple.
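One way to sketch Xgc's derived-integers approach (MySQL at the time had no series generator, so a small digits table is a common trick; the table names `digits` and `t` are hypothetical):

```sql
-- A 0-9 digits table lets you derive N sequential integers by cross join.
CREATE TABLE digits (d INT PRIMARY KEY);
INSERT INTO digits VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);

-- Derive the last 60 days as a calendar, then outer join the data to it
-- so missing days show up with a zero count:
SELECT cal.dt, COUNT(t.loggedtimestamp) AS hits
FROM (
  SELECT CURDATE() - INTERVAL (tens.d * 10 + ones.d) DAY AS dt
  FROM digits AS tens CROSS JOIN digits AS ones
  WHERE tens.d * 10 + ones.d < 60
) AS cal
LEFT JOIN t ON DATE(t.loggedtimestamp) = cal.dt
GROUP BY cal.dt
ORDER BY cal.dt;
```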
password2only a few years more and i can add the 'per year' graph :D
password2atm there's only one year's worth of data
ruben23hi there, anyone have encountered this error somehow ---> MySQL ERROR 2026 (HY000): SSL connection error
babilenI see "Got fatal error 1236 from master when reading data from binary log: 'log event entry exceeded max_allowed_packet ..." after running "CHANGE MASTER TO MASTER_LOG_POS ..." to get past "ERROR: Error in Log_event::read_log_event(): 'Event too big', data_len: 1895825408 .." in the binlog.
babilenUnfortunately the query seems to still be way too large (no idea what they are doing there) at 1.9G (assuming data_len: 1895825408 gives an indication of that).
babilenHow would I find out which query that is and take a more detailed look?
snoyesbabilen: http://thenoyes.com/littlenoise/?p=338
snoyesin short, you're at the wrong position.
babilensnoyes: Yeah, that is exactly what I read and what got me to run "CHANGE MASTER TO MASTER_LOG_POS ..." with the log_pos from Exec_Master_Log_Pos in slave status.
babilenNot sure which position I should use instead
snoyesdid you also provide the file name?
snoyesIt's not documented what happens if you provide just the position and not the name. I wonder what that does...
babilenWell .. no idea
babilenI did not specify the file as, well, your article said "give it the Exec_Master_Log_Pos from SHOW SLAVE STATUS" :)
snoyesI'd hope it keeps the same one it's working on.
snoyesanyhoo, it seems the position you specified was wrong. On the master, mysqlbinlog path/to/binlog/file | grep -e "^# at "
snoyeswill give you the list of legal positions
babilenHow do I figure out which one I need?
babilenThe file pointed to by "Master_Log_File: mysql-bin.001443" does not exist on the master
snoyesMaster_Log_File is the one for the IO thread. You want Relay_Master_Log_File, since that's for the SQL thread and goes with the Exec_Master_Log_Pos.
babilenRight, how do I figure out the right position?
babilen(thank you!)
snoyeswhich file is the Relay_Master?
snoyesbut you said mysql-bin.001443 doesn't exist on the master
babilenmysqld-relay-bin.000003 does though
snoyesthat sounds like you're looking on the slave
babilenI am looking on the slave (that is "SLAVE STATUS" output)
babilen"SHOW SLAVE STATUS" that is
snoyesso mysql-bin.001443 should exist on the master, and mysqld-relay-bin.000003 should exist on the slave.
babilenThe command was "slave stop; change master to master_log_pos=51908895; slave start;"
babilenmysqld-relay-bin.000003 exists on the slave, but mysql-bin.001443 does not exist on the master
snoyesmysql-bin.001443 has to exist on the master, or you'd get an error about the file not found, not about a log event entry being too large
babilenWell, it doesn't
babilensnoyes: Unfortunately I have to leave now. You have been most helpful. I will continue to debug this further and will create a backup of our master using innobackupex in the interim. That should allow us to setup a new slave if I don't manage to repair this one.
babilenIf there's anything you want me to read, please keep it coming. Simply looking for information.
jervImagine the scenario where you have a schema {A,B}. B only has a couple of different values. I want to be able to quickly search for all rows based on a value of B. Should I put an index on B or does it only work if B has a bunch of different values?
jervs/work/make sense
passagejerv: the term you're looking for is cardinality
snoyesjerv: at one time we said the rule of thumb was that if your query needed to examine more than 20% of the rows, it was faster to do a table scan than to do an index lookup.
snoyesbabilen: check if binary logs are stored in some directory other than where you are looking. They don't have to be in the data directory.
snoyesjerv: I imagine that's still roughly true, although exactly how the optimizer works has changed a bit, so it may not be right at 20% anymore.
jervsnoyes: thanks, that's what I was looking for. It will be significantly more than 20% probably so the answer will be no.
tauruswag#/join #ubuntu-cn
shouvikin a select-and-insert query, does the row I am selecting from also get locked?
baloneyquestion, i have a column (varchar(45)) which stores an ip_mask that takes wildcards (192.14*.146.*); what would be the best way to check an ip agains that column?
shouvikhad a situation where mysql ran into a deadlock
baloneyshouvik: inserts it does
baloneyduring insert, it will be locked
Naktibaldabaloney: seriously? 14* ?
shouvikInsert locks the table it's writing into I am aware
Naktibaldawho uses decimal masks for IP numbers
baloneyNaktibalda: just an example
shouviknot sure if the table it's reading from gets locked too
Naktibaldabaloney: change * to %, use LIKE
baloneygotcha, so if the user sets his ip mask at app level to 192.14*.146.* - i'll just replace * to %, makes sense
NaktibaldaWHERE '' LIKE ip_mask
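Putting Naktibalda's two suggestions together (the `acl` table name and sample IP are hypothetical; `ip_mask` is baloney's varchar column):

```sql
-- Store the mask with % instead of *, or convert on the fly,
-- then test the visitor's IP against each stored mask.
-- Note the constant goes on the LEFT of LIKE; the pattern column on the right.
SELECT *
FROM acl
WHERE '192.148.146.23' LIKE REPLACE(ip_mask, '*', '%');
-- e.g. the mask '192.14*.146.*' becomes '192.14%.146.%' and matches.
```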
snoyesshouvik: depends. If you use SELECT...FOR UPDATE, then yes.
baloneyperfect, thanks Naktibalda
shouviksnoyes no, I was inserting into a new temp table, the insert bit
shouvikbasically the scene was I had a select query based on 7 joins from my user reg table
snoyesshouvik: so you're doing an INSERT...SELECT query?
snoyesor you did a SELECT, and then a separate INSERT?
shouvikand then I was doing an insert into a temp table
shouvikINSERT INTO SELECT FROM type query
shouvikokay it locks
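A sketch of shouvik's situation: under the default REPEATABLE READ isolation (and statement-based binary logging), InnoDB takes shared locks on the rows the SELECT part of an INSERT ... SELECT reads, so the source tables are not lock-free and can participate in deadlocks. All table names here are hypothetical.

```sql
-- The SELECT side places shared next-key locks on the rows it reads
-- (with default settings), which can deadlock with concurrent writers
-- on the source tables.
INSERT INTO tmp_report (user_id, total)
SELECT u.id, SUM(o.amount)
FROM users AS u
JOIN orders AS o ON o.user_id = u.id
GROUP BY u.id;
```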
samsshDoes the cli cache statements?
Naktibaldado you mean command history?
samsshOh, yes.
samsshYes, no
samsshI'm sorry, I was trying to respond to myself.
samsshI'm assuming by this: http://stackoverflow.com/questions/4561292/how-to-clear-query-cache-in-mysql that my answer is yes.
NaktibaldaQuery cache and query history are different things.
thumbssamssh: why is your query cache even enabled to begin with?
jwhquery cache is for pros
jwhor so I heard
samsshNot the cli history. I'd keep that.
Naktibaldaquery cache has nothing to do with client
samsshAlright, let me back up so I'm not XYing.
samsshI'm testing a query which seems too slow on a remote server.
thumbssamssh: SELECT SQL_NO_CACHE col1, col2 FROM tbl ...
samsshAfter the first run, the reported time is 0s.
thumbssamssh: you can skip the query cache with my statement.
samsshthumbs, I'm trying it, but the results are much less consistent. It queried twice in 1/10 and 1/3 the normal time, then back to 0s for the next two runs.
thumbs!t samssh idfk
ubiquity_botsamssh: Please paste your query, the EXPLAIN select.., and the relevant SHOW CREATE TABLE/s in one pastebin with the sql formatted so it is readable
samsshAllow me a minute to get sqlfiddle setup.
SilentNinja1on MySQL 5.6 what's the max record size in a query before the in-memory temp table will use disk?
salleSilentNinja1: Split your question into parts and ask again :)
SilentNinja1OK thanks
snoyesSilentNinja1: 8 years ago the answer was https://www.percona.com/blog/2007/01/19/tmp_table_size-and-max_heap_table_size/
SilentNinja1OK thanks
SilentNinja1so answer still same :)? thanks
SilentNinja1so my tmp_table_size & max_heap_table_size are set to 16777216
samsshI can't reproduce my problem in sqlfiddle in a reasonable length of time. :p I added indexes to two FK fields which looked like they might have been the problem, and it appears to have improved the query time drastically in the actual application. MySQLWorkbench times didn't improve much, but I guess that doesn't really matter.
SilentNinja1so my tmp_table_size & max_heap_table_size are set to 16777216, does that mean each can have 16MB in memory before hit disk?
SilentNinja1each record I mean
SilentNinja1nevermind that's total table size
SilentNinja1query size
aradapilottable size
aradapilotnot query or record
aradapilotsingle table size
scmpMySQL seems to ignore case on a group by? Is this intended or an issue with utf8mb4? http://hastebin.com/oqokubapoj.mel
Naktibaldascmp: case sensitivity depends on collation, your table uses default collation which is case insensitive
scmpwould utf8mb4_bin be a better choice for case-sensitive behavior?
scmpNaktibalda: thank you
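For reference, the two options scmp's question implies, sketched with hypothetical table/column names:

```sql
-- Per-column: make the column itself case sensitive.
ALTER TABLE t MODIFY name VARCHAR(100)
  CHARACTER SET utf8mb4 COLLATE utf8mb4_bin;

-- Per-query: force a binary collation just for this grouping.
SELECT name COLLATE utf8mb4_bin AS name_cs, COUNT(*) AS cnt
FROM t
GROUP BY name_cs;
```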
ligACTION yeahs for losing communication channels
thumbsACTION dances with lig 
ligACTION weeeeeees around the room with thumbs 
blinky_hi all, if I have a value going into a table that can be either a text string or a number do I just use a varchar? I am going to say yeah but just wanted to check
snoyesblinky_: is it really a number? Can you do math with it? Or is it just a string that happens to be numeric digits?
blinky_it would be a number that would be compared against another, but I can convert the string to a int in php if needed
snoyesif it's compared against another like one phone number to another, then they are really just strings.
snoyesIf you want to be able to say that 99 < 100, then those are numbers.
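snoyes's distinction in one statement: string comparison goes character by character, numeric comparison goes by value.

```sql
SELECT '99' < '100' AS as_strings,  -- 0: the character '9' sorts after '1'
       99 < 100     AS as_numbers;  -- 1
```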
epaphusWhat is better to run a heavy mysql db... sata3 or ssd ?
sporkmore iops is better
aradapilotsata's not great. a sas array with a hardware controller (bbu+cache) performs just as well as an ssd build for most setups at a much lower cost with better durability. when you need it, though, ssd does make a big difference
aradapilotbut buy what you need. getting a $20k server that you won't use is wasteful
aradapilotif the choice is just between sata and ssd, go ssd, sata's for desktops and webservers.
sporkactually, given the right backend, software raid can out perform most HW solutions
spork(mdraid specifically)
aradapilotthere are cases where it can, but those are very rare
sporkno they aren't
sporkif the array is configured correctly, and the system has enough backplane bandwidth no problem
sporkin fact, mdraid allows for many other `nice' features to optimize for the disks you're using
spork(gpt aligned partitions, customized chunk sizes, hints to the backing FS, etc)
spork*alignment hints*
sporkin any case, I agree with you about sata, slow and only good for bulk storage or cheapo systems
sporkour best performing solutions have been mdraid mirror sets of PCIe flash solutions
sporkafter that, extremely wide sas 15k array set (20 disks, raid 10)
sporkbut in most cases, I've found that mysql / mariadb gets bottlenecked pretty fast within CPU land
aradapilotit's a different animal with flash, you understand
aradapilotaye, a good chunk of workloads get cpu bound (offloading raid onto cpu isn't great there, though usually it's not a big %)
aradapilotwhen you have to buy 15,000 of them, you might go with a more cost effective build that can do the same thing
sporkfrom my experience if you're running a multicore system, affinity keeps things in check
sporkthe other major downside to HW raid, is you're stuck with that vendor if things go sideways
sporkIt's also a lot more expensive to implement things like T10
aradapilotwell, we've got hundreds of spare controllers, but they also basically never break. don't skimp on them, though, it can be quite bad if they do
aradapilotthe main reason mdraid doesn't work on spinning disks (flash is a different story) is that using writethrough cache is pretty damn terrible. controller writeback is a huge boon.
aradapilotif you don't care about durability, or set up a hardware mirror, well, there ya go; but that's generally impractical
aradapilotit's something you do in labs for fun, because it's definitely fun, but not really for prod
sporkyou can avoid cache slowness by going zero copy directly to disk
aradapilotthat has the same problems as writethrough
chilaquilhey guys, is there any tool that can compare two mysql databases in two different hosts and output only the difference in the schema structure, no data?
chilaquiland preferably create a script of all the missing tables, views, functions, etc?
raymondchilaquil: Not one that is well known at least.
aradapilotmight be something in percona toolkit i am not thinking of, but that's not really a common task, so not very likely
aradapilotdump info schema and diff?
raymondaradapilot: Yea, I just looked at that, nothing.
snoyesthere's a mysql utility for that
raymondsnoyes: Oh, what's it called?
snoyeschilaquil: http://dev.mysql.com/doc/mysql-utilities/1.5/en/mysqldiff.html
aradapilotsnoyes to the rescue
aradapilotha, neat
aradapilotlearn something new every day
aradapilotchilaquil: ^
aradapilotoh you did ping him
snoyesuse --difftype=sql to get the script you wanted
aradapilotACTION is blind
chilaquilsnoyes: that's exactly the one I'm looking at right now, but I'm reading it has some problems
chilaquilsnoyes: https://www.percona.com/blog/2015/04/15/checking-table-definition-consistency-mysqldiff/
chilaquilsnoyes: I will take a look anyway, thanks snoyes !
snoyesso looks like there's a bug where if the column definition has changed, then it thinks it needs to recreate the index too
snoyesNot really a major concern for the first example - it's going to recreate the table anyway, so there's really no extra work.
snoyesthe second one would be an issue.
dascoderhey guys, thanks for the help yesterday
dascoderI was able to import the 9 gb csv by splitting it into separate csv, using sed and removing bad characters
chilaquilsnoyes: well, but for dropping and re creating the primary key, do you think there will be any issue with it?
SilentNinja1hi how do can I find out which queries or tables are incrementing the create_tmp_tables ?
johnnyfiveHowdy. I just upgraded MySQL, and now i'm in a catch-22. When trying to start MySQL, I receive an error that the mysql.user table is damaged and I need to run mysql_upgrade. However mysql_upgrade requires the mysql server to be running. Anyone run into this before?
chilaquilsnoyes: is not needed like the guy says, but do you think that could create further problems?
SilentNinja1show processlist doesn't seem to show anything useful, thanks.
snoyeschilaquil: dropping and recreating the key will take time that doesn't need to be spent. Not sure if it will correctly handle issues with foreign keys or with auto_increment fields.
chilaquilsnoyes: how much time? like 0.0001 seconds?
snoyesno, could be a very long time, like close to the time it takes to copy the whole table.
chilaquilsnoyes: I don't understand why it would take that long
aradapilotbuilding a large index can take a long time
snoyesto drop and recreate the primary key?
aradapilotor if primary key, building the whole table
aradapilotif you do change the primary key make sure the drop and add are in the same ALTER statement, nonatomic work on that is really bad juju
aradapilotother keys, not as big a deal
chilaquilyeah my first problem: ERROR 1091 (42000): Can't DROP 'PRIMARY'; check that column/key exists
sallechilaquil: SHOW CREATE TABLE will help you
chilaquilsalle: that was generated by the mysqldiff script
sallechilaquil: SHOW CREATE TABLE will help you to see why you get this error
chilaquilhttps://www.percona.com/blog/2015/04/15/checking-table-definition-consistency-mysqldiff/ How this guy was able to drop the primary key like that?
aradapilotchilaquil: he's adding a new one in the same ALTER
aradapilotjust changing it
aradapilotchanging a pk is rare, it's generally only done if there was a mistake creating the table
amuzeUm, I have a table of users, now, I want each user to have an ID, so I made the first column 'id' MEDIUMINT PRIMARY KEY NOT NULL UNIQUE AUTO_INCREMENT
amuzeBut, then I realized that the ids change when a user is deleted or something, what should I use instead?
amuzeOr rather, should I just keep users and put a 'suspended' column?
snoyesamuze: if the user is deleted, so what?
amuzeACTION shrugs
amuzeI don't know, um, this is the first website I make, so don't expect a lot
chilaquilaradapilot: well, that's how the mysqldiff is generating the script
amuzeI am still considered new in the web world, coming from application programming
snoyesamuze: I'm just asking what it was that you realized
chilaquilaradapilot: I'm also adding a new one in the same alter
amuzeI was looking up somewhere on the internets and I found that the IDs are susceptible to change if a row is deleted or something, sort of a re-index
amuze(When using AUTO_INCREMENT)
chilaquilsalle: this is my show create table http://pastie.org/private/fgjzh8b83dwqewacnxbygq
amuzesnoyes, I was referring to you here ^
chilaquilchilaquil: I still don't see why I'm getting that error.
salleamuze: Let's talk about Primary Keys not IDs :)
salleamuze: PRIMARY KEY values don't change
amuzeACTION facepalms for being stupid
salleamuze: When user is deleted from *this* table you don't want to reuse his primary key for another user
amuzeYeah that's what I want
amuzeI thought they change
amuzesalle, even if it's AUTO_INCREMENT?
salleamuze: You can change them if you wish
salleamuze: What exactly do you mean by "change" by the way?
amuzeBy change I mean that other users use the deleted user's id
amuzeI just don't want that to happen
amuzeI just want the user to have a static id for the lifetime of the account
snoyesamuze: the existing values are not going to change unless you manually go and make it happen
amuzeah, okay
amuzeGot it
snoyesif you're using auto_increment, it's not going to reuse previously deleted values, unless there's some obscure bug or again you manually make it happen
litheum(unless you delete a user with the highest id, and then restart innodb, in which case their user id might be re-used)
thumbsyes, restarting with empty InnoDB tables is annoying like that
litheum"empty InnoDB tables"?
sallelitheum: I'd call it obscure behaviour :) Not a bug, because it is documented, but easy to miss
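The edge case litheum and salle are referring to, sketched: before MySQL 8.0, InnoDB re-derived the auto-increment counter from MAX(id)+1 at server startup, so the highest deleted id could be handed out again after a restart.

```sql
CREATE TABLE users (id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY);
INSERT INTO users VALUES (), (), ();   -- ids 1, 2, 3
DELETE FROM users WHERE id = 3;
-- After a server restart, pre-8.0 InnoDB recomputes the counter as
-- MAX(id)+1 = 3, so the next insert reuses id 3.
-- MySQL 8.0 persists the counter across restarts instead.
INSERT INTO users VALUES ();
```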
cmullicanWe have a couple of events that call SPs that call other SPs that call other SPs, perhaps another layer depending on the stage of the process. Sometimes, something in the chain gets a lock wait timeout and dies -- but not every time. The error is reported with the name of the event that launches the whole chain of processes. Is there a way to find out which query or process actually had the problem? (5.7.12)
thumbslitheum: InnoDB table with no rows because of a truncate, then mysqld is restarted, and the new a_i value is now 1
thumbslitheum: unless I'm confused about all this
litheumthumbs: if you did truncate, you don't even need to restart the server to get that behavior. truncate re-creates the table, so you start from 1 again anyway.
chilaquilHere's my problem: http://pastie.org/private/qqadys4a0m3v3uhdxacmg
thumbslitheum: yes, I messed that up. I was thinking of something else.
litheumchilaquil: you have DROP PRIMARY KEY in there twice
chilaquillitheum: which I'm doing
litheumchilaquil: what?
chilaquilI'm dropping the primary key twice
litheumchilaquil: why do you think that makes any sense?
sallechilaquil: Why?
litheumchilaquil: how on earth can you drop the primary key and then drop it again when it obviously will not exist, having already been dropped?
sallechilaquil: And why do you drop the primary key and then create the *same* primary key again?
chilaquilbecause that's how the script mysqldiff generated it
chilaquilsalle: so if I have 2 primary keys when I do drop primary key will drop those two?
litheumchilaquil: you can't have two primary keys!
sallechilaquil: You don't have 2 primary keys you can not have more than one
aradapilotprimary key is what defines the row
sallechilaquil: You have compound primary key
chilaquilPRIMARY KEY (`tenant_id`,`action_id`),
sallechilaquil: If mysqldiff creates that ALTER TABLE then it is seriously broken
chilaquilsalle: that's what I was saying before
sallechilaquil: PRIMARY KEY (`tenant_id`,`action_id`) is compound primary key defined over two columns
sallechilaquil: No you said you have 2 primary keys which is impossible
chilaquilsalle: I was referring to mysqldiff
chilaquilsalle: but the problem is not the second time I drop the primary key
chilaquilthe problem is running the first time trying to drop the primary key
sallechilaquil: What kind of problem you are trying to solve?
litheumchilaquil: the error you see in the output is absolutely because that single ALTER TABLE tries to do DROP PRIMARY KEY two times.
litheummysql 5.6.26-log (root) [test]> create table t1 (id int unsigned not null auto_increment primary key);
litheummysql 5.6.26-log (root) [test]> alter table t1 drop primary key, drop primary key;
litheumERROR 1091 (42000): Can't DROP 'PRIMARY'; check that column/key exists
chilaquillitheum: but why If I execute drop primary key one time in mysql I get an error as well then?
litheumchilaquil: what are you talking about
litheumchilaquil: you have not shown that behavior yet, have you?
johnnyfiveHowdy. I just upgraded MySQL, and now i'm in a catch-22. When trying to start MySQL, I receive an error that the mysql.user table is damaged and I need to run mysql_upgrade. However mysql_upgrade requires the mysql server to be running. Anyone run into this before? Or have any advice?
sallechilaquil: Not to mention the entire ALTER does not make any sense because it drop all the keys and then creates exactly the same keys again
chilaquilno I haven't
chilaquillitheum: this is what I ran inside of mysql: alter table action_alias drop primary key;
chilaquiland I got the same error
chilaquilsalle: yes, we already discussed that with snoyes
litheumSHOW CREATE TABLE `action_alias`\G
chilaquilsalle: but that's how mysqldiff works
litheumit sounds like mysqldiff is damaged and should not be used
sallechilaquil: If this is how mysqldiff works it is usless
chilaquilwell, it's broken in that part, but it's doing what I want it to do to compare two schemas,
chilaquilso I just have to figure out how to fix it
chilaquilsalle and litheum checkout what the guy says about the drop primary key https://www.percona.com/blog/2015/04/15/checking-table-definition-consistency-mysqldiff/
chilaquillook for: on the “employees” table it drops and recreates the PRIMARY KEY, again something that is not needed at all.
litheumchilaquil: so it's confirmed that the tool is broken
chilaquilit is
sallechilaquil: Then don't use it
chilaquilbut there's no other tool that does that at the moment
chilaquilI have to fix that part
chilaquilwell, coming back to the problem, why can't I run alter table action_alias drop primary key; ?
chilaquilI mean that also gives me an error
litheumSHOW CREATE TABLE `action_alias`\G
chilaquillitheum: you saw my pastie.org correct?
litheumchilaquil: yes, but you didn't execute alter table action_alias drop primary key; in that pastie
litheumchilaquil: show the table structure and the result of your alter table statement in the same pastie
chilaquilwhy are you saying to execute the show create table .... if it's in there?
litheumchilaquil: because i don't trust you.
chilaquillitheum: see my pastie again
litheumchilaquil: you can't just simply do both things and put them both in one damn pastie?
chilaquillitheum: again, check my pastie, is in there
litheumchilaquil: you *do not* have the result of alter table action_alias drop primary key; in that pastie
litheumchilaquil: here is what i'd like to see: http://pastie.org/10804062
chilaquillitheum: Ok I will paste it but it's the same error message
sallechilaquil: I bet a beer you will not get the same error message if SHOW CREATE shows PRIMARY KEY :)
chilaquillitheum: http://pastie.org/private/l8uxnqrlqo4veq7dvaurwq
litheumchilaquil: thanks!
litheumchilaquil: see how that error is *not even close* to being "the same error message" as the one in your first pastie?
litheumchilaquil: the first pastie: ERROR 1091 (42000) at line 292: Can't DROP 'PRIMARY'; check that column/key exists
litheumchilaquil: the second pastie: ERROR 1025 (HY000): Error on rename of './database/#sql-18b8_7e08' to './database/action_alias' (errno: 150)
chilaquillitheum: fk hold on
chilaquilsorry I think I pasted it from my second database
litheumchilaquil: look at how much time i had to spend begging you to simply put the thing into a pastie because you're being stubborn for no reason. it's just bizarre.
chilaquillitheum: I apologize for that, please give me a moment
amuzeI don't get the syntax error here
papi83dmmy mysql server is running at max capacity
papi83dmhow can I improve that?
papi83dmmy web app does a lot of select and then update
aradapilotpapi83dm: what do you mean by "max capacity" - cpu load high? disk bound?
papi83dmhigh cpu running at 200%
papi83dmthat's what shows when I do top
aradapilotpapi83dm: assuming mysql is the only thing using significant cpu, yes? check what % of that is iowait
aradapilotalso, run this
papi83dmERROR 1146 (42S02): Table 'performance_schema.events_statements_summary_by_digest' doesn't exist
aradapilotpapi83dm: you on a really old version maybe?
aradapilotlike more than a few years?
papi83dmmysql Ver 14.14 Distrib 5.5.47
papi83dmdon't think debian has any higher
aradapilotpapi83dm: https://dev.mysql.com/downloads/repo/apt/ or https://www.percona.com/doc/percona-server/5.6/installation/apt_repo.html
aradapilotwhichever you use
aradapilot5.5 still gets maintenance releases, but it hasn't really been relevant since <=2013
papi83dmI should do this off-hours
cmullicanWe have a couple of events that call SPs that call other SPs that call other SPs, perhaps another layer depending on the stage of the process. Sometimes, something in the chain gets a lock wait timeout and dies -- but not every time. The error is reported with the name of the event that launches the whole chain of processes. Is there a way to find out which query or process actually had the problem? (5.7.12)
aradapilotyeah. you can take a full slow log for a period, do a query digest on it, but that's a big pain
aradapilotit'll get you cpu time stats, to some extent
papi83dmaradapilot: would the upgrade corrupt the data?
aradapilotpapi83dm: no
papi83dmmy database size is 12GB
chilaquillitheum: I'm back sorry I got a call from someone
aradapilotpapi83dm: yeah, small db is fine, doesn't impact the upgrade
papi83dmany big changes that would break the size?
papi83dmi'm also running apache 2.4 and php 5.6 on the same server
papi83dmtrying to improve how mysql does the insert/update
aradapilotoh, running things besides mysql? generally shouldn't do that, but it can work - check their resource consumption, though
papi83dmi've been thinking to move the mysql server to a 2GB server
papi83dmi wonder if 2GB would be sufficient
aradapilotthat's below the realistic minimum system reqs for mysql, though you can turn a bunch of features off until it will run
passagepapi83dm: 2GB ram is a little low for a smartphone
aradapilotthe minimum mysql server you can get here is 48G, to compare
aradapilotup to 512G
aradapilotbut with such a tiny database, you don't need much
aradapilot16G will be totally fine, probably even less
papi83dm16GB is way out of my budget
papi83dmcan't even afford an 8GB server
papi83dm8GB = $80 dollars a month
papi83dmaradapilot: should I move to 5.6 or 5.7?
aradapilotup to you. 5.6 is still the most common, it's very solid. 5.7 is new, and exciting, but not necessary.
aradapilot5.7 has some great new stuff
speleoHey all. I'm trying to partition a DB over two drives by RANGE. I'm converting timestamps to ints by use of UNIX_TIMESTAMP, but it still seems to be telling me that the value needs to be of type int.
speleoAny ideas what I could do to fix this?
speleoExact error: "VALUES value for partition 'p01' must have type INT"
SmeefI have a compound index on two columns (house number, and street name). My query is using LIKE with only those two criteria specified, and there is a trailing wildcard after the text (not before). I'm also using query caching, but the results are still incredibly slow. Is there anything else I can do? Would reducing the data length of the fields make a noticeable difference?
passageSmeef: show your actual sql
passage!t Smeef idfk
ubiquity_botSmeef: Please paste your query, the EXPLAIN select.., and the relevant SHOW CREATE TABLE/s in one pastebin with the sql formatted so it is readable
speleoOkay, I found part of my problem. I was using fractional seconds in the UNIX_TIMESTAMP. Now that I fixed that, I get "Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed"
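The pattern from the manual that avoids both of speleo's errors: UNIX_TIMESTAMP() is the one timezone-dependent function permitted in a partitioning expression, and only when applied directly to a TIMESTAMP column (not DATETIME, and not wrapped in other expressions). Table and partition names here are illustrative:

```sql
-- RANGE partitioning on a TIMESTAMP column via UNIX_TIMESTAMP()
CREATE TABLE log_events (
    ts  TIMESTAMP NOT NULL,
    msg VARCHAR(255)
)
PARTITION BY RANGE (UNIX_TIMESTAMP(ts)) (
    -- the boundary values are integer constants, satisfying "must have type INT"
    PARTITION p01  VALUES LESS THAN (UNIX_TIMESTAMP('2016-01-01 00:00:00')),
    PARTITION p02  VALUES LESS THAN (UNIX_TIMESTAMP('2016-04-01 00:00:00')),
    PARTITION pmax VALUES LESS THAN MAXVALUE
);
```

If the column is DATETIME rather than TIMESTAMP, the usual alternative is partitioning on TO_DAYS() or YEAR() instead.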
Smeefwill need a moment, IRC is open on a different comp
SilentNinja1Hi any way I see which queries are incrementing the Create_tmp_disk_tables counter in the Global status?
SilentNinja1or creating the tmp_disk_tables?
raymondSilentNinja1: Not easily. :(
SilentNinja1Oh OK
SmeefSorry for the delay, here is a pastebin of the schema, query, and explain: http://pastebin.com/NajutuSz
SmeefI can also post info from the php.ini file on the server if necessary
SmeefMy main goal right now is to get these queries running as fast as possible
snoyesSmeef: what is your connection's character set?
Smeefsnoyes, where would I see that information?
snoyesSHOW VARIABLES LIKE 'char%';
snoyesI suggest two things. Swap the order of the index to (street, house). Then do a SET NAMES 'utf8'; before running the query.
Smeefsnoyes: cp850
Smeefsnoyes: thanks for the tip
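snoyes' two suggestions, sketched as SQL. The table and index names are illustrative (Smeef's real schema is in the pastebin); the idea is that a leading-wildcard-free LIKE can use an index only on the leftmost column of a compound index, and a cp850 connection charset can force conversions that defeat the index:

```sql
-- 1. Put the LIKE'd column first in the compound index
ALTER TABLE addresses
    DROP INDEX idx_house_street,
    ADD INDEX idx_street_house (street, house);

-- 2. Match the connection charset to the table before querying
SET NAMES 'utf8';

SELECT * FROM addresses
WHERE street LIKE 'Main St%' AND house = '123';
```

Re-running EXPLAIN after both changes shows whether the index is now chosen.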
arunpyasican we run multiple mysql version in same server ?
litheumarunpyasi: yes
arunpyasilitheum: how to ? will that consume more RAM ?
brat_what is the best way to monitor queries, i mean their execution time
snoyesbrat_: you have one particular query and you want to know its time, or you want to monitor all queries and gets reports across the board?
brat_monitor all queries
snoyesif you want to do it yourself for free, enable the slow query log, set long query time to 0, and spend some time learning to use mysqldumpslow
snoyesif you want to spend money and have queries captured and analyzed and reports created and pictures to look at and get alerts when things start to look bad, buy MySQL Enterprise Monitor or VividCortex
brat_and for a particular query
snoyesrun it in the cli and look at the time it shows
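The do-it-yourself setup snoyes describes, sketched out. On 5.6+ both variables are dynamic, so no restart is needed; the log path is illustrative:

```sql
-- Log every statement, not just slow ones, for a sampling period
SET GLOBAL slow_query_log   = ON;
SET GLOBAL long_query_time  = 0;   -- applies to connections opened after this

-- ... let traffic run for a while, then summarize at the shell with e.g.:
--   mysqldumpslow -s t -t 10 /var/lib/mysql/host-slow.log
-- (-s t sorts by total time, -t 10 shows the top 10 query patterns)
```

Remember to restore long_query_time afterwards: logging everything on a busy server produces a large file quickly.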
nisseni160419 22:49:16 [ERROR] /usr/sbin/mysqld: Incorrect key file for table '/tmp/#sql_d7a_0.MYI'; try to repair it - getting this error message, but I can't find that file
nissenithe tables are InnoDB which means no possibility to repair
passagenisseni: that looks like a copy of a myisam index file in the /tmpdir, like when you do an ALTER TABLE
passagenisseni: are you changing that table to innodb?
nissenipassage: no tables are myisam
nissenithey never were
litheumnisseni: what version of mysql are you using
nissenilitheum: rebooted, now it won't even start
nisseniERROR: There's not enough space in /var/lib/mysql/ - this is what I get now
passagenisseni: df -h
nisseniI just upgraded the server at linode.com - not sure if I have to allocate more manually or something like that
nissenihonestly, that doesn't tell me much
nissenipassage: ?
passagenisseni: I don't understand your confusion. there's not enough disk space available for mysql server to start. df -h confirms it. make space
nissenipassage: and as I said, I just doubled the space with a linode upgrade
passagenisseni: ah, you're the magical myisam .MYI file guy. I'm done arguing with you. perhaps someone else
nissenipassage: eh? As I said, I knew I was out of space, so I upgraded the space with linode, which is the host. So I should have doubled the space. But it doesn't show with df -h
litheumnisseni: if df -h tells you you're out of space, mysql won't magically find it somewhere else
litheumnisseni: if you try increasing the size of your disk and that doesn't work, take it up with the folks who control the size of your disk. it's not a mysql problem.
nissenilitheum: got it
vachohey all.
vachoSQLSTATE[42000]: Syntax error or access violation: 1370 execute command denied to user 'app_user'@'%' for routine 'nas_dev2.GetDistance'
vachoGRANT EXECUTE ON PROCEDURE nas_dev2.GetDistance TO 'root'@'%';
vachobut I still get the same error... and yes, I have flushed privileges
thumbsyou don't need to run FLUSH PRIVILEGES
dragonheartapp_user != root ?
vachothumbs: ok, I will not next time.
litheumvacho: the error message refers to user 'app_user'@'%', but your GRANT statement refers to 'root'@'%'.
thumbsdragonheart: no, no, no, that's too obvious
vachodragonheart: good question, I don't know. I got this dump from a vendor. Very likely it is.
vacholitheum: I am not using app_user, which is the weird thing. My query is running as root
litheumvacho: presumably this routine has a DEFINER clause that means it runs in the context of 'app_user'@'%'
vacholitheum: I think that's true. That's what I found googling about the problem. I was hoping grant permissions to root would solve it
litheumwhy would you hope that?
litheuminstead you pretty surely need to grant some permission to the account referenced in the error message
vachooh I see what you mean, run in the context of app_user
vacholitheum: that account does not exist on my SQL instance. That's from the old system prior to migration. I want root to be able to run it. Is this possible?
litheumyou'll need to change the routine to have a different DEFINER... or you need to create that account and give it the correct privileges.
vacholitheum: ok thanks for the help. I will google that and see how that can be done.
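For reference, there is no ALTER statement that changes a routine's DEFINER; the supported route is to recreate the routine under the new account. A sketch, with the routine's real signature and body elided (swap PROCEDURE for FUNCTION if that is what the routine turns out to be):

```sql
SHOW CREATE PROCEDURE nas_dev2.GetDistance;  -- copy the signature and body

DROP PROCEDURE nas_dev2.GetDistance;

CREATE DEFINER = 'root'@'%' PROCEDURE nas_dev2.GetDistance(/* original args */)
BEGIN
    /* original body pasted back in */
END;
```

Editing mysql.proc directly (as tried below) is unsupported and easy to get wrong; drop-and-recreate keeps the grant tables consistent.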
blackmagicHow can mysql use 200-380% of the CPU?
thumbsblackmagic: multiple threads/queries running at once
blackmagicEither htop is reporting wrong, or its actually doing 200+% CPU.
blackmagiclike right now its at 431%
vacholitheum: UPDATE `mysql`.`proc` p SET definer = 'root@%' WHERE definer='app_user@%';
vachothat didn't work either.
dragonheartblackmagic: normally 100% is a 100% of one CPU. If you have more then you can exceed 100%
vachoSQLSTATE[42000]: Syntax error or access violation: 1370 execute command denied to user 'root'@'%' for routine 'nas_dev2.GetDistance'
blackmagicoh, so its 6 CPU's (cores) and the 300% is actually 100%x3?
blackmagicso its normal behavior, sweet.
blackmagicweird part is, ive never noticed it on any other server using the same PHP
vachoblackmagic: sounds like your hosting company upgraded
thumbswhat does php have to do with the mysqld cpu usage?
blackmagicwell, on the previous server it never did over 80-90% period. and on this one. its using 300+% constant.
vacholitheum, thumbs: any thoughts?
blackmagicvacho: i upgraded the server, better CPU, more RAM, and more disks on a hard raid over a soft raid
thumbsvacho: what is the definer on the routine now?
vachothumbs: http://pastie.org/10804421
litheumvacho: what version of mysql are you using?
litheumvacho: is there actually a root@% user?
vachoyes there is.
litheumvacho: you really should *not* have a root user with a wildcard hostname...
litheumSHOW GRANTS FOR 'root'@'%';
vacholitheum: https://www.dropbox.com/s/wzjqdipuews14oe/Screenshot%202016-04-19%2016.26.36.png?dl=0
thumbshere... let me post ascii text output on .... dropbox
litheumwoo screenshot of plaintext terminal
thumbshe should have compressed it, for good measure.
vacholol, sorry gents
vachoI can run it in terminal and paste again
thumbsI'm just astonished that folks actually go through those steps
vachothumbs: actually, it's the easiest for me. Shift+CMD+4 gives me a drawing area, it gets automatically stored in DB and gives me a share link.
vachoDB=dropbox. Probably not the best acronym to use in #mysql
litheumforget the screenshot part, i'm astonished that people use mysql workbench
vacholitheum: oh... yes it's pretty buggy. But a great way to connect to different DB's quickly. The import/export feature is pretty solid too. Plus it generated great ERD's
litheumvacho: you need to do GRANT EXECUTE ON FUNCTION instead of GRANT EXECUTE ON PROCEDURE
litheumbut you shouldn't have any of this based around root
litheumyou should get rid of that and create a different user to be the definer of this function
litheumcreating a root@% user just for this purpose is totally ridiculous
vacholitheum: root already existed. This is a dev environment, I just want to get the app up and running.
thumbsvacho: the best environment to learn good practices is a dev environment!
vachothumbs: true..this will eventually live in AWS RDS, so I will write a script to handle all this.
vachoGRANT EXECUTE ON PROCEDURE nas_dev2.GetDistance TO 'root'@'%';
vachostill gives me: Syntax error or access violation: 1370 execute command denied to user 'root'@'%' for routine 'nas_dev2.GetDistance'
vachostored procedure and routines are synonyms?
vacholitheum: 'GRANT EXECUTE ON PROCEDURE `nas_dev2`.`getdistance` TO \'root\'@\'%\''
vacholitheum: when I run SHOW GRANTS
vacholitheum: my bad! I fixed it and it works!! :)
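The fix that worked, for anyone reading the log later: GetDistance is a stored FUNCTION, not a PROCEDURE, and the EXECUTE grant has to name it as one:

```sql
GRANT EXECUTE ON FUNCTION nas_dev2.GetDistance TO 'root'@'%';
```

GRANT EXECUTE ON PROCEDURE and ON FUNCTION are distinct privileges even for the same routine name, which is why the earlier statement kept failing.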
xa0zIs there a way to speed up mysqldump to local drive?
passagebetter cpu? faster drive?
dragonheartxa0z: if the single threaded aspect is the limitation rather than IO perhaps looking at mydumper
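A sketch of both suggestions; database names and paths are placeholders:

```shell
# --single-transaction takes a consistent InnoDB snapshot without locking;
# --quick streams rows instead of buffering whole tables in memory
mysqldump --single-transaction --quick dbname > /fast/disk/dbname.sql

# mydumper dumps tables in parallel (-t threads), which helps when one CPU
# core, rather than the disk, is the bottleneck
mydumper -t 4 -B dbname -o /fast/disk/dump/
```

If the dump is still slow with these, the target drive's write speed is the next thing to check, as passage says.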
happygilmoregentI need some help setting up mysql in debian
happygilmoregentwhen I do mysqld --initialize, check the log, copy the temp password, and then attempt a secure installation, it fails
narcarsissWhat is the name of that feature in MYSQL that prevents you from deleting a table if it is relied upon another table? I cant find it on google.
passagenarcarsiss: if there is a FOREIGN KEY CONSTRAINT relationship between the tables, you would get an error
narcarsisspassage: That's what I mean, Is there a name for that Feature?
passagenarcarsiss: FOREIGN KEY CONSTRAINT
narcarsisspassage: If i could not get that, it's definitely too late for me to be coding. Thank you :)
passageyw :-)
narcarsisstake care, ni nite.
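A minimal sketch of the behavior passage names, with illustrative table names: the child's FOREIGN KEY constraint is what blocks dropping the parent.

```sql
CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;

CREATE TABLE child (
    id        INT PRIMARY KEY,
    parent_id INT,
    FOREIGN KEY (parent_id) REFERENCES parent(id)
) ENGINE=InnoDB;

DROP TABLE parent;
-- fails with an error along the lines of:
-- ERROR 1217 (23000): Cannot delete or update a parent row:
-- a foreign key constraint fails
```

Dropping `child` first (or the constraint itself) lifts the restriction.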
jeffrey3234if i have a 100gb DB and want to leave 1000 rows in each table, basically making it a sample DB, is there a way to do this? basically drop every row from row 1001+ for every table. this is so we can give it to another company so they can analyse the schema
dragonheartjeffrey3234: is there an autoinc column to say SELECT ... WHERE MOD(id,1000)=0;
jeffrey3234i don't follow
Xgcjeffrey3234: You can't really do that blindly, due to constraints. You'll need to be smarter about it.
jeffrey3234well looking at the db there are only about 7 tables that hold 90% of the size, i may just do it manually
dragonheartmaybe pick out the FK criteria columns, create table XXX_ids AS SELECT id FROM XXX where MOD(id,1000)=0; and then pull out those on other tables when doing a dump.
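One per-table sketch of the sampling idea, with placeholder schema and table names. Without an ORDER BY the 1000 rows are arbitrary, and as Xgc notes, foreign keys mean parent rows have to be sampled before the children that reference them:

```sql
-- Copy structure only (indexes included, FKs not), then a 1000-row slice
CREATE TABLE sample_db.orders LIKE prod_db.orders;

INSERT INTO sample_db.orders
SELECT * FROM prod_db.orders
LIMIT 1000;
```

Repeating this for the 7 large tables and dumping sample_db in full would give the other company a representative schema without the 100GB of data.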