LiuYanokay, it looks like I upgraded from mariadb to mysql, there're extra columns (such as `is_role` in the mysql.user table), let me drop that column and try ...
Phil-TheVaporisthola amigos!
Phil-TheVaporistquick question regarding mysql replication. Can you setup replication in a way that every time it takes place, it deletes and recreates a specific database on the slave server?
uid1Phil-TheVaporist: Replication is more of a constant, on-going thing. What you're describing is more like a dump and restore it seems.
Phil-TheVaporisttrue, but i'd like it to be automated
dragonheartdescribe a more general requirement and we probably can suggest something.
Phil-TheVaporistI can always drop the database and reimport the sql I guess
Phil-TheVaporistman..same guys as yesterday..youguys rock
Phil-TheVaporist:)
Phil-TheVaporistWas able to make the query I needed with self inner joins!!!
Phil-TheVaporistsaved myself lots of headaches
Phil-TheVaporistbasically for the replication problem
Phil-TheVaporistI'm running opencart which uses a Mysql db. I want to have a test environment where I will most probably break everything
Phil-TheVaporistI'd want to drop/recreate so that after I fuck everything up, I can restore to current live website state
Phil-TheVaporistand fuck it up even more
dragonheartyeh, a dump/reimport sounds reliable. that way it should survive most fuckup you do.
Phil-TheVaporistcaus let's face it..I WILL fuck it up :)
LiuYanss23: dragonheart: after I dropped the `is_role` column in the mysql.user table, `flush privileges` works now. my mistake for assuming MariaDB and MySQL are fully compatible... thank you guys.
Phil-TheVaporistother question, related to mysql but indirectly. Easiest framework to simply input data with validation/ view data via http?
dragonheartLiuYan: yes they have differences. yes YOU will need to check what they are to ensure they meet YOUR requirements. Yes that table difference is one difference. No I've no idea what your flush priv problem was or even why you are manipulating privs directly in the table. There are plenty of SQL commands for that.
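For reference, a minimal sketch of what dragonheart means by managing privileges through SQL instead of editing mysql.user rows directly (account and database names here are made up):

    CREATE USER 'app'@'localhost' IDENTIFIED BY 'secret';
    GRANT SELECT, INSERT, UPDATE ON appdb.* TO 'app'@'localhost';
    REVOKE INSERT ON appdb.* FROM 'app'@'localhost';
    -- GRANT/REVOKE maintain the grant tables themselves, so no FLUSH PRIVILEGES is needed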
wyounghey, for each record in one table I have a reference in another that refers to it a set number of times. I want to select the rows that reference the first as columns.
dragonheart!t wyoung joins
ubiquity_botwyoung: Very basic join information: http://hashmysql.org/index.php?title=Introduction_to_Joins - For more indepth info: http://dev.mysql.com/tech-resources/articles/mysql-db-design-ch5.pdf
wyoungdragonheart: if I join I get duplicate rows of the first table. I want results of the second to appear as columns
dragonhearti'm too lazy to work out what you mean
dragonheart!t wyoung sqlfiddle
ubiquity_botwyoung: sqlfiddle.com is an easy way to share your schema, your data, and the queries you're testing. It saves us time and makes it more likely that you'll get an answer! (Make sure you set it to use MySQL.)
wyounghmmmm
wyoungok
uid1wyoung: Sounds like what you want to do is best solved in application code but yeah, through something on sqlfiddle.com
wyounguid1: yeah I was thinking that although I thought there may be a way to express it in SQL
uid1I'd give it a look, but I'm imagining some ugly SQL.
wyounguid1: the columns in the 2nd table are id, key, value. The keys I want to appear as columns are known, there is also one key for each id in the first table
wyoungwould I need a separate join for each known key?
wyoung(there are 10 keys)
stephenAnyone here skilled in mysql GIS?
wyoungstephen: I am skilled in postgres gis, imo more mature than mysql
wyoungbut mysql is getting there
dragonheart!t stephen ask
ubiquity_botstephen: You don't have to ask to ask, or state: "I have a question", Don't ask: "Is anyone around?" or "Can anyone help?". Just Ask The Question. Also, please read: http://workaround.org/getting-help-on-irc
wyounguid1: you need more info?
dragonheartwyoung: just get the sqlfiddle going.
wyoungok
uid1wyoung: yes please. On sqlfiddle.com you can create 2 tables and populate them with a couple sample records.
stephenI remember reading that mysql had implemented earth as the default geometry (sphere) for spatial functions, not just st_distance_sphere, It might have been dev/beta branch. Anyone know what I'm talking about? Looking for docs
dragonheart!m stephen spartial analysis functions
ubiquity_botdragonheart: Sorry, I have no idea about that manual entry.
dragonhearthttp://dev.mysql.com/doc/refman/5.7/en/spatial-analysis-functions.html
dragonhearthttp://dev.mysql.com/doc/refman/5.7/en/spatial-extensions.html
wyoungdragonheart: http://sqlfiddle.com/#!9/b8a30d/1
wyoungdragonheart: I would like to rewrite that so address, phone, description and memberid are columns and there will be only 2 rows displayed (one for each entry in t1)
wyoungobviously there will be like 98 - 200 rows on my production run and there will be 10 different possible names.
wyoungdragonheart: will I need a join for each name I want to appear as a column? or is there another way to do it?
lamnethwyoung: can you modify the tables? This design is heading right into the brick wall!
jkavalik!t wyoung about eav
ubiquity_botwyoung: EAV is an antipattern: http://www.sheeri.com/archives/197 -- http://en.wikipedia.org/wiki/Entity-attribute-value_model
jkavalikwyoung, other than that, you can join the table multiple times, each join limiting for one parameter (if you know their numbers)
dragonhearthttp://sqlfiddle.com/#!9/b8a30d/7
dragonheartlike that ^
dragonheartyes its ugly. see previous by jkavalik
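A sketch of what that multi-join pivot looks like, assuming t2 has columns (id, t1id, name, value) as discussed above and the known names from wyoung's example; each join is restricted to one key in its ON clause:

    SELECT t1.id,
           a.value AS address,
           p.value AS phone,
           d.value AS description,
           m.value AS memberid
    FROM t1
    LEFT JOIN t2 AS a ON a.t1id = t1.id AND a.name = 'address'
    LEFT JOIN t2 AS p ON p.t1id = t1.id AND p.name = 'phone'
    LEFT JOIN t2 AS d ON d.t1id = t1.id AND d.name = 'description'
    LEFT JOIN t2 AS m ON m.t1id = t1.id AND m.name = 'memberid';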
wyoungjkavalik: I can have a where clause in my join? or do I need to do a join on a sub-select?
wyounglamneth: no I cannot modify the table :(
jkavalikwyoung, ON has a function similar to WHERE but for joins, it can contain complex expressions too
jkavalikwyoung, generally just select the data with the duplicates and "merge" them in app (if the number of parameters per item is not so big) or do two selects, one for items and one for all their parameters without duplication using "t2.t1id IN (<list of previously retrieved t1 ids>)"
wyoungjkavalik: ah ok cool
wyoungjkavalik: can I use LIMIT 1 on a join?
wyoungon a RIGHT JOIN that is
jkavalikwyoung, no, only on the entire query or in a subquery.. if you need to do that, it's one of the hints that the schema is in need of a redesign
jkavalikwyoung, what do you need the limit for? limit does not usually have much meaning without ORDER BY and you cannot order a join either, but there are other techniques
wyoungjkavalik: well there should only be one t2.name for a t1.id however this isn't enforced by schema.
jkavalikunique index on id,name would make it so.. with a proper delete before creating it.. and which one you want to get if there are multiple?
dragonheartcan you add attribute values to the t1 table and use triggers on the t2 table to keep them in sync (while the rest of the code that uses t2 is slowly migrated away)?
wyoungdragonheart: no, I cannot change the schema, I am stuck with it
stephen@dragonheart I believe the correct question I need to ask is if MySQL has implemented the Geography type anywhere
stephenOr if I can now assign SRID's other than 0
wyoungstephen: postgresql has, I am not sure why you are using mysql for geo stuff when it is so immature
stephenI'm not so much, I just have data there, and I for some reason remember getting past this hurdle before.
stephenI'd like to use ST_CONVEXHULL for points on earth.
stephenAnd then do ST_COVERS & Intersects...
wyoungdragonheart, uid1: How does this look (besides being ugly)? http://sqlfiddle.com/#!9/b8a30d/13
wyoungdragonheart, uid1: Any way to be sure that the t1.id column is unique? (only displays the first match if there is more than one t2.name row for a t1.id)
dragonheartwhen using crappy structures you can keep the special features that come with it. btw having trouble retrieving sqlfiddle atm.
wyoungdragonheart: yeah, sqlfiddle failed to display that 3 times, I created a local mysql database to test it
uid1wyoung: I can read it on sqlfiddle.com - you re-use alias t2 several times
uid1Also... the SELECT ... `value` AS alias isn't working which is strange
uid1Oh, I was mistaken - t2 is a table name - I thought it an alias. Sorry.
jkavalik!t wyoung groupwise max
ubiquity_botwyoung: http://jan.kneschke.de/projects/mysql/groupwise-max/ http://dev.mysql.com/doc/refman/5.7/en/example-maximum-column-group-row.html
jkavalikwyoung, ^this is a way to get "first" row from a join when there are multiple (with the "first" defined as newest/oldest/biggest or other way)
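Applied to the earlier t1/t2 example (assuming t2 has a unique id column), the groupwise trick picks one t2 row per t1 row and name by its smallest id, then joins back for the value:

    SELECT t1.id, a.value AS address
    FROM t1
    LEFT JOIN (SELECT t1id, MIN(id) AS id
               FROM t2
               WHERE name = 'address'
               GROUP BY t1id) AS first_a ON first_a.t1id = t1.id
    LEFT JOIN t2 AS a ON a.id = first_a.id;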
antiPoPwhen I do "SELECT ADDTIME('00:05:00', '00:00:59')" I get an alphanumeric string like 30303a30353a3539. why
antiPoP?
wyounguid1: :)
wyoungjkavalik: MySQL 5.5
jkavalikantiPoP, works for me in console, what is your client?
dragonheartbroken :-)
wyoungdragonheart: would i be better off doing a LEFT JOIN instead of a RIGHT JOIN?
dragonheartthey are different. I've lost interest btw
wyoungantiPoP: you need to cast the string to a time?
wyoungdragonheart: heh, ok, carry on then, as you were!
wyounguid1: hmm, the query I have come up with runs pretty slow, any way to speed it up?
jkavalik!t wyoung use explain
ubiquity_botwyoung: http://dev.mysql.com/doc/refman/5.7/en/execution-plan-information.html and http://dev.mysql.com/doc/refman/5.7/en/explain.html
dragonheart!t wyoung query optimization
ubiquity_botwyoung: https://github.com/jynus/query-optimization
dragonheartdecent structure /cough
antiPoPjkavalik, I think it's 5.1, but I'm using pma
jkavalikantiPoP, well, try commandline mysql client to reproduce, if that works, it is pma problem (one of many ;) )
antiPoPwyoung, to get the rows within 1 minute interval
dragonheartantiPoP: always check on a mysql client before asking
antiPoPusing between
antiPoPthanks, I just don't have console access there
dragonheartwe aren't your solution to a lack of a decent dev/testing environment
wyounguid1, jkavalik, dragonheart: with the right joins it takes 4 minutes to execute. With an inner join it takes 4 secs :\ but then I need to do extra processing in a script. I wonder if python is faster than mysql :)
dragonheartbecause its a different operation
antiPoPdragonheart, I know, I just didn't consider that it could be pma... and that why they gave me :(
antiPoP*and that's what they gave me :(
jkavalikwyoung, you ask it to do a different thing with a different join so the time is different too, decide which version is the one you need and then check the query plan and optimize the right query
wyoungjkavalik: both give me the data I need except the 2nd one I need to do more processing on it in python
Xgcwyoung: sqlfiddle is down, as usual. I didn't see your schema / SQL. Look into GROUP BY and GROUP_CONCAT
Xgcwyoung: It's an easy way to do what you want, for limited cases. The data from the second table will be combined into one column of the result, but you'll be able to process / parse it easily.
Xgcwyoung: The right way is to do that in your application. But, sometimes this is convenient.
wyoungXgc: hmmmm, I am outputting to CSV / XLS so I would like to keep those columns separate
Xgcwyoung: This is perfect for csv.
Xgcwyoung: You can format that extra column in csv form, if you wish.
wyoungXgc: yeah I am going to do it in my script, use the inner join version
XgcGROUP_CONCAT will work fine for that. It requires some understanding of GROUP BY.
wyoungXgc: ok, I will take a look at it
wyoungthanx for the assistance
doevwhat needs less space, decimal(5,2) or int?
OcnodI have a feeling that is debatable on multiple contingencies
archivistdoev, premature optimisation
iateadonuthi
iateadonutwe've got a table of permissions that one developer on the team insists needs to be normalized. it is a one-to-one relationship.
iateadonutthe layout is something like: user_id | can_export_csv | can_see_all_customers |, etc
eversonHi, I need to order a list of integers being stored in a varchar column. The problem is that they're alphanumeric, so 25 is being shown after 201 as opposed to before. Is there a way around this?
iateadonutmembers of the team think it needs to be three tables, because 1) everyone else does it that way and 2) when it gets to more than 20 or 30 permissions, we'll need a different structure.
archivisteverson, use a numeric type field
eversonarchivist, please elaborate?
iateadonuti'm wondering if anyone has any input on this? why does it matter if a table has 30+ fields? are all the fields loaded into memory? i think the index would be loaded into memory and then just the single row that matches, right?
eversonarchivist, it's not my database so i can't change the type field
iateadonutand if there's any reason to 'normalize' this type of table with just a one-to-one relationship
archivistiateadonut, you are well below any limits
iateadonutarchivist, do you mind elaborating on what limits i am below?
archivisteverson, the only way is to add spaces in a string type, also not that sensible
eversonfound a solution: ORDER BY CAST(`column` AS SIGNED)
archivistiateadonut, number of fields
iateadonutright now we only have 10. it could grow much larger.
iateadonutwhat do you think the limit of fields should be? is there any other reason to normalize such a table?
archivistyou may want to group permissions, that is when you add tables
iateadonutactually, we are grouping permissions, but there are no user permissions
iateadonutso if you want a user to have specific permissions, you create a new group.
iateadonutand then assign that user to the group.
jkavalikiateadonut, the problem is not with the limit (for now) but that defining a new permission means altering the table, which may be a very costly operation when there are many users, instead of just inserting a few rows with new values
archivistso grouped perms will be a few rows in another table instead of EVERY row
uid1iateadonut: If the list of permissions is to be presented to an admin, maybe adding new ones -- it's easier to add one record to a separate table than another field.
archivistthis is about ease of updating a group's permissions, one row in a table instead of all rows in the user table
jkavalikone table for permission type/name and one for user_has_permission (or group) - that's normalized, easily extendable with new permissions, and it is sparse - only granted permissions take space; permissions that aren't granted don't need to be stored at all, instead of the null/false being stored now
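A sketch of the normalized layout jkavalik is describing (table and column names are illustrative):

    CREATE TABLE permission (
        permission_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        name          VARCHAR(64) NOT NULL UNIQUE    -- e.g. 'can_export_csv'
    );
    CREATE TABLE group_has_permission (
        group_id      INT UNSIGNED NOT NULL,
        permission_id INT UNSIGNED NOT NULL,
        PRIMARY KEY (group_id, permission_id)
    );
    -- a new permission is an INSERT plus a few grant rows; no ALTER TABLE needed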
iateadonutarchivist, yes, sorry. there is actually a groups table.
iateadonutand the group_id is put in the user table
iateadonutwith 8 permissions with a binary switch, i guess there is a possibility of 256 groups.
iateadonuteven with 12 permissions the possibility is only 4096; not exactly a large table.
doevarchivist: what exactly do you mean by "premature optimisation"? Do you think it doesn't really have an effect on performance?
archivistdoev, use the right datatype for the data, worrying about its size is secondary
archivistmany make the mistake of thinking about the size and the application dies later due to insufficient space for reality
doevarchivist: even with 800 million records?
archivistrecords of what
doevarchivist: records where decimal(5,2) could be replaced with int
archivistwhat is in the decimal?
doevthe decimal must be multiplied by 100 of course
archivistwrong answer
doevwell, ... in the decimal, there is a decimal
archivistif money then decimal AND make it large enough
doevah, its a height value.
archivist5,2 seems far too small for any reality
archivistis it height in mm, yards, miles, furlongs, hands
archivistheight of land in Holland will be negative
doevarchivist: its in m
wfqhi all
archivistdoev, so you can only just fit the 829.8 m (2,722 ft) tall Burj Khalifa in Dubai
doevarchivist: yes
doevactually no problem
aliencatHello, I am trying to select all that are not in a certain group but the problem is if I am in both 1 and 2, my profile would still be listed since being in group 2 is not being in group 1
aliencathttp://hastebin.com/uqojapexuc.sql
aliencatI guess I can solve it with a sub select or something
no_gravityI have a table with votes to add or reject candidates. Two text fields "name" and "vote". So a row with "Mr. Superduper" and "add" is a vote to add Mr. Superduper to the interplanetary party. Can this query to show the top voted 10 candidates be optimized? http://pastebin.com/NSU7Uibc
no_gravityAt the moment it takes 1.5s for 100k votes.
no_gravityInteresting... adding a combined key for "name" and "vote" brings that down to 0.3s
no_gravityWonder if it can be sped up further.
no_gravityIt's still using a temporary table and filesort.
Ocnodwhy not use int signed
Ocnod-1 for negative, 0 for neutral, +1 for positive
no_gravityWould that make a difference? In fact, the "vote" field is an enum field I just noticed.
Ocnodhaving a signed int(1) for only positive and negative comparisons, yeah, index that and I think you would be ideal
no_gravityOk, so I might have to live with the 0.3s then.
no_gravityFunky, the order of the combined index is important too. vote_name does not speed it up. name_vote does.
jkavalikno_gravity, order of columns always is important in index
no_gravityI see
no_gravityI wonder why it still uses a temporary table.
no_gravityI would expect finding the top 10 of something would not need that.
jkavalikno_gravity, with (name, vote) your group-by can use the index efficiently and the index is covering because only name and vote are being used in that query
no_gravityOn the other hand.. maybe it *is* needed.
no_gravityAs every candidate could still be in the top ten until the whole table scan is done.
jkavalikno_gravity, but there is not much more you can do because the order is different than the grouping - the first N can only be found when all votes are counted
no_gravityyup
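The actual query lives only in no_gravity's pastebin, but a plausible shape of it, together with the composite index that made the GROUP BY covering, would be:

    ALTER TABLE votes ADD INDEX name_vote (name, vote);

    SELECT name, SUM(vote = 'add') AS add_votes
    FROM votes
    GROUP BY name
    ORDER BY add_votes DESC
    LIMIT 10;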
jkavalikthe limit can be optimized in cases you order on a "known" quantity (specific column), not result of a function or grouping aggregate
jkavalik!t no_gravity use explain
ubiquity_botno_gravity: http://dev.mysql.com/doc/refman/5.7/en/execution-plan-information.html and http://dev.mysql.com/doc/refman/5.7/en/explain.html
no_gravityjkavalik: I use explain all the time.
jkavalikno_gravity, yep, I noticed after posting that you wrote about filesort already
jkavalikno_gravity, now you have something like "using index; using temporary; using filesort" with full index scan?
no_gravityjkavalik: yup: 1 SIMPLE votes index name_vote 256 94622 Using index; Using temporary; Using filesort
no_gravity0.3s for 95k votes.
jkavalikI don't see any better index or simple way to make it faster than that
no_gravityOk. Thanks.
jkavalikyou could use a table to keep the aggregate and update it with triggers, so the numbers are always current and always ready to be read, but it may not be worth the trouble anyway
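A rough sketch of that trigger idea (the votes/vote_totals names and the 'add' value are assumptions based on the description above):

    CREATE TABLE vote_totals (
        name VARCHAR(255) NOT NULL PRIMARY KEY,
        adds INT NOT NULL DEFAULT 0
    );

    DELIMITER //
    CREATE TRIGGER votes_ai AFTER INSERT ON votes
    FOR EACH ROW
    BEGIN
        INSERT INTO vote_totals (name, adds)
        VALUES (NEW.name, NEW.vote = 'add')
        ON DUPLICATE KEY UPDATE adds = adds + (NEW.vote = 'add');
    END//
    DELIMITER ;

    -- the top 10 then becomes a cheap read:
    -- SELECT name, adds FROM vote_totals ORDER BY adds DESC LIMIT 10;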
no_gravityTrue.
no_gravityThe system runs just fine like that. For over 10 years now :)
no_gravityAnd today I sped it up 5x with the index.
jkavalikso good for next 10 years :D
no_gravityYeah
jkavalikhow big is the table (in MB or GB) and how big is your buffer pool? does the index fit in there?
no_gravityIt's just 100k votes. So the index should fit easily.
no_gravityDid you see my pm?
jkavalikI did, interesting project, just keeping the technical part in here - thats why I am asking about data size, not number of rows, but yes, unless you are significantly memory limited, it should be good
no_gravityYeah. And that list is not generated very often.
no_gravityAnd unless new votes come in, it comes from MySQL's query cache anyhow.
Simon1234I want to restart the MySQL server completely, but it says too many connections.
no_gravityIs there a rule of thumb what the ratio of reads to writes in a DB is to make indexes worthwhile?
passageno_gravity: there is a maintenance cost for indexes, but performance improvement can be 1,000x. so, don't keep indexes you don't use
no_gravitypassage: Well, that was already clear to me. My question is where is the cutoff.
passageno_gravity: learn to use the EXPLAIN capability. enable the slow query log (including queries not using index), use EXPLAIN on those
passageno_gravity: the cutoff has more to do with cardinality, not read/write
no_gravityYup, I can measure and test and explain.
no_gravityBut I am thinking about the general rule of thumb.
passageno_gravity: your concern about read/write is a good one for query cache, though :-)
no_gravityWell, that's another layer.
no_gravityI'm pretty sure that 1 billion reads for every write will almost always warrant an index.
no_gravityAnd I'm pretty sure that 1 read for every one billion writes will almost always mean you are better off without an index.
no_gravityBut how about the range in between?
no_gravityHow about the situation where writes are as common as reads?
passageno_gravity: no, a where clause on a column with high cardinality (uniqueness of data) calls for an index
passageno_gravity: I think you're looking at the wrong thing. perhaps someone who agrees with you can chime in
no_gravityIf you have a small table and billions of reads per write, then no - that does not call for an index. No matter the uniqueness.
passageno_gravity: you're good then
no_gravityBillions of writes per read i mean.
cortexitHi all, I have a question please, what is the best way to export a db from one server to another every day? I want to automate this action. Thanks
no_gravitycortexit: One way is to write a bit of code for both servers.
no_gravityExport via mysqldump on the one side and pipe it into mysql on the other.
cortexitno_gravity : And I made a cron for automate ?
cortexitI make sorry
no_gravitycortexit: Yup
cortexitOk thank you
sallecortexit: If you want automation why don't you set up Replication?
sallecortexit: MySQL replication is built in and can take care that your slave server replicates all changes from the master
no_gravityIs there a way to calculate the time it takes to insert a value into an indexed field?
krylhi
no_gravityCompared to a read of a field that is not indexed.
salleno_gravity: You can't insert values in fields. You can only insert rows
no_gravitysalle: What I mean is: if one of the fields in the row has an index on it, then additional time will be spent on updating the index.
salleno_gravity: Indeed
no_gravityI wonder how that compares to a read on the table with an unindexed field in the where clause.
salleno_gravity: All indexes must be updated when you insert new row
krylThe following SQL query instantly returns the result: SELECT name FROM users WHERE id_user = 412; but the following takes a long time to return: SELECT email, address FROM user WHERE user = 'Smith'; What can be done to get faster result from the second query?
no_gravitysalle: Well, lets say we have a table with just two string fields.
no_gravitysalle: Without indexes, inserts will be almost "free".
sallekryl: Missing index. Pastebin results of EXPLAIN and SHOW CREATE TABLE
no_gravitysalle: With indexes, reads like WHERE name='joe' will almost be free.
salleno_gravity: So what? Indexes help speed up certain queries and sometimes the difference is huge
no_gravitysalle: So my question is about the ratio between the write speedup of noindex and the read speedup of index.
salleno_gravity: a SELECT from a table with 10 million rows which returns only 5 rows can take milliseconds with an index and hours or even days without an index
salleno_gravity: The insert penalty caused by the index with such table will be negligible
salleno_gravity: Every case is different
no_gravitysalle: How can "SELECT * FROM t WHERE name='salle'" take hours with 10 million rows?
no_gravityLast time I tried my computer was able to scan 10 million strings quite fast.
salleno_gravity: It easily can if there is no index because the server must scan all 10 million rows
no_gravityThat must be a very very old computer.
krylok the solution could be to index the user field... and I'll lose time on writes? is there a major difference between innodb / myisam ?
salleno_gravity: scanning 10 million strings stored in text file is quite different from scanning table with 10 million rows
salleno_gravity: Row size can be much bigger than your string
no_gravityWell, I said 2 text fields.
no_gravityLets say two char fields.
sallekryl: Huge
salleno_gravity: TEXT is very slow for sure
no_gravitySo the rows cannot be very big.
krylI guess using myisam will be faster in this case ?
no_gravitysalle: char
passage!t kryl about use innodb
ubiquity_botkryl: <archivist_emc> busy sites with inserts should use innodb <domas> very busy sites with no inserts should use innodb too <thumbs> idle sites with no inserts should use innodb too <adaptr> servers without mysql should use innodb too
no_gravitysalle: I feel like you are avoiding the question for some reason.
salleno_gravity: Which question?
no_gravitysalle: I think we are talking in circles.
krylah ah
salleno_gravity: Nope
sallekryl: You can write a book about difference between MyISAM and InnoDB and this book will be quite big one :)
krylit's more of a theoretical question on how to speed up this kind of request when developers decide to take the short path and select by different fields without an index... is the only answer to create an index on these fields?
passagesalle: scroll up. I wrestled with Mr. Crazy about an hour ago.
krylsalle, so forget this question it's not the main one
sallekryl: Most of the time yes it is.
sallekryl: Learn how to use EXPLAIN
salle!m kryl explain
ubiquity_botkryl: See http://dev.mysql.com/doc/refman/5.7/en/explain.html
salle!m kryl using explain
ubiquity_botkryl: See http://dev.mysql.com/doc/refman/5.7/en/using-explain.html
sallekryl: It tells you how the optimizer plans to execute your query. With the second query you asked about most likely it would tell you type=ALL
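Concretely, using the table and column names from kryl's second query (they look inconsistent in the original question, so treat them as placeholders):

    ALTER TABLE user ADD INDEX idx_user (user);
    EXPLAIN SELECT email, address FROM user WHERE user = 'Smith';
    -- type should change from ALL (full table scan) to ref once the index exists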
maestrojedI found mysql not responding this morning. I checked to see if the MySQL server is running. It was not. I try to start it and I get "start: Job failed to start". Any tips on how to debug this
krylok thank you guys ;)
krylI think it's one of the most famous problems with developers & mysql. They ask the admin to resolve it ...
maestrojedUpdate: my server ran out of disk space. That caused MySQL to crash. Now I have resized my server but I still can't start MySQL. Any suggestions?
passagemaestrojed: examine the mysql error log following the most recent restart attempt
maestrojedpassage I checked /var/log I see a few log files that seem to belong to mysql but are empty. Do I need to enable logging?
maestrojedI can google that
passagemaestrojed: the error log is on
passagemaestrojed: debian-based systems route mysql errors to syslog.
passagemaestrojed: at least, they did before systemd
maestrojedok, give me a minute. Thanks! passage
moephi
moepIf I use the mysql-client in a bash script, I'm losing my prepared statements, if they are not re-transmitted in every connection; is there a way to keep the connection open, or have the statements persist over multiple connections until I deallocate them manually?
cyberfab007Hey room
cyberfab007Hows the morning , long time no chat :)
thumbscyberfab007: how can we help you?
cyberfab007what's some good software to view what queries are being run in mysql
thumbscyberfab007: enable the general query log, and tail it.
cyberfab007yes that sounds easy enough
pyoorHi all. In a multithreaded application, I need to select a row where condition exists, then update a column in that row. To prevent race conditions, should I use a transaction or SELECT... FOR UPDATE?
thumbspyoor: both.
thumbspyoor: either way, you should use InnoDB for the table engine
pyoorthumbs: gotcha. thanks!
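A minimal sketch of the pattern thumbs is recommending (the jobs table and its columns are hypothetical); the row stays locked between the SELECT and the UPDATE so no other thread can claim it:

    START TRANSACTION;
    SELECT id FROM jobs WHERE status = 'pending' LIMIT 1 FOR UPDATE;
    -- ... application decides what to do with the locked row ...
    UPDATE jobs SET status = 'running' WHERE id = /* id returned above */ 42;
    COMMIT;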
cyberfab007thumbs: is it called general_query_log in my.cnf
thumbs!m cyberfab007 query log
ubiquity_botcyberfab007: See http://dev.mysql.com/doc/refman/5.7/en/query-log.html
mehworkis it correct that JOINs in mysql always create derived tables?
thumbsmehwork: no.
mehworkat least in the sense that you can't access outer tables, like you can with correlated subqueries
passagemehwork: derived table means something very specific
thumbsmehwork: a derived table is the result of a subquery.
passagemehwork: you can think of a derived table as an aliased subquery in the FROM clause. the resultset of the subquery creates a 'table' that you address using its alias.
passage!t mehwork about derived table
ubiquity_botmehwork: SELECT a.name, b.name, IFNULL(b.cnt, 0) AS cnt FROM tblname AS a LEFT JOIN (SELECT name, COUNT(*) AS cnt FROM tblname2 GROUP BY name) AS b ON a.name = b.name;
EGregI ran into a weird-butt bug with mysql!!
EGregI wondered how my app could have made such a state but then when I manually tried to fix it in the database I hit the same error. Watch this screenshot:
mehworkok
EGreghttps://www.dropbox.com/s/ffphgdw0z800w7z/Screenshot%202016-07-19%2011.55.05.png?dl=0
EGregI get error: Duplicate entry 'Patients-Patients/referrer/Qw--Patients/admins' for key 'PRIMARY'
mehworkit's confusing because this top answer said that the join is a derived table https://stackoverflow.com/questions/10410152/how-to-access-outer-table-in-a-left-outer-join
EGregThere is no such row!
thumbsmehwork: don't rely on stackoverflow.
sallemehwork: stackoverflow is wrong about MySQL most of the time
EGregBut notice that it cuts off the field value in the primary key
EGregmaybe that's why
mehworkok, they're usually wrong about bash too on SO
salle!t EGreg enter
ubiquity_botEGreg: The enter key or return key is not a substitute for punctuation. Use a period '.', comma ',', colon ':', semi-colon ';', emdash '--', or ellipsis '...' instead. In other words: DON'T HIT ENTER UNTIL YOU'RE DONE.
EGregWhy would Mysql cut off the field value to just two characters?
thumbsmehwork: the result of a join is another table.
EGregHere is the table: https://gist.github.com/EGreg/7d943b902634bb0180fad9ec12a25a1c
EGregHere is the error: Duplicate entry 'Patients-Patients/referrer/Qw--Patients/admins' for key 'PRIMARY'
sallemehwork: Derived table means subquery in FROM clause. SELECT ... FROM (SELECT ...)
EGregThe thing is, the streamName is being cut off!
EGregThere is no such duplicate row
sallemehwork: Since the result of each SELECT is a table you can use it instead of base tables
EGregdo you understand my issue?
mehworkok, but is the other thing i said true? That you can't access outer tables from inside joins?
sallemehwork: What do you mean by outer table?
sallemehwork: And what you mean by "from inside joins"?
thumbsmehwork: are you talking about the correlated behaviour?
EGregthumbs: I think you might know this
EGregwhy would I get this error in MySQL when there is no duplicate row:
EGregDuplicate entry 'Patients-Patients/referrer/Qw--Patients/admins' for key 'PRIMARY'
mehworksalle: select * from A LEFT JOIN B [can't access outer table "A" in here] ON ... ?
mehworkthumbs: yeah
thumbsmehwork: wrong.
sallemehwork: Who says so?
thumbssalle: stackoverflow, naturally.
mehworki got an error unknown column when i tried, but it was using an alias for A, maybe that's why
EGreghttp://stackoverflow.com/questions/5224040/myisam-unique-keys-being-cut-off-at-64-bytes-causing-collisions ?
sallemehwork: At this place you can only have ON clause and in this ON clause you *can* access A table
thumbsmehwork: if you alias the table, you must use the alias to reference that table.
EGreghttp://stackoverflow.com/a/5224834/467460
salleEGreg: Stop that please
thumbsEGreg: I don't care much for stackoverflow links.
EGregStop asking my question? FINE
salleEGreg: Stop pasting stackoverflow
EGregI already asked my question several different ways
mehworksalle: right but that's what that stack overflow link says too, that you can only access the outer table in the ON clause, not *inside* the JOIN
mehworkjust wanted to confirm, thanks
thumbsmehwork: forget about stackoverflow. Close that tab now.
EGregwell then help me please
sallemehwork: There is no outer table when you join tables
EGregDuplicate entry 'Patients-Patients/referrer/Qw--Patients/admins' for key 'PRIMARY'
thumbsEGreg: I saw your question. Be patient.
sallemehwork: There is no such thing as "inside the join"
EGregThat is the error I get when I try to enter any row with streamName beginning with "Patients/referrer/Qw"
salleEGreg: What is the problem with that?
EGregbecause, salle, there is no such duplicate row
salleEGreg: Duplicate key error is straightforward.
EGregbut there is no such duplicate row
salleEGreg: If MySQL says there is such row I'd rather trust it
EGregthe reason the DBMS thinks so is that it cuts off the value in my field
mehworksalle: there is as far as the overall sql, maybe not the results
EGregMySQL also shows there is no such row after a direct query
salleEGreg: Pastebin that query with the result
EGregSELECT * FROM streams_access WHERE
EGregstreamName >= 'Patients/referrer/Qw'
EGregand streamName < 'Patients/referrer/Qx'
mehworki trust you guys more than SO btw, or i wouldn't be here confirming. IRC is always better
salleEGreg: only the first 20 chars of streamName are within the Primary Key.
EGregWHY?
salleEGreg: Pretty unusual, but valid
salleEGreg: Why? It is your table. Why are you asking us
salleEGreg: Look at the PRIMARY KEY at your https://gist.github.com/EGreg/7d943b902634bb0180fad9ec12a25a1c
thumbsmehwork: now, it would be a good idea to make sure you understand the correlated behaviour too
passageit's always the guy with 3 weeks of mysql experience screaming about finding bugs :-)
passageand I mean screaming :-)
sallemehwork: I still don't understand what you are asking about
thumbssalle: he's confused about the correlated behaviour
EGregsalle: https://gist.github.com/EGreg/7d943b902634bb0180fad9ec12a25a1c what about it
sallemehwork: When you join tables the join produces new table which combines rows from the original two
EGregoh I see!!
EGreg20
EGregthanks
sallemehwork: There is no such thing as "inside join"
mehworksalle: i get what you're saying now i think
salleEGreg: PRIMARY KEY (x(20)) means first 20 chars are unique. The rest can be duplicated
sallemehwork: That's good, but I still don't get what you said before :)
EGregthanks for finding the bug :)
salleEGreg: There is no bug
thumbsthe bug is the user.
passagelol
mehworkit makes no sense to say `SELECT * FROM A AS foo LEFT JOIN B as bar SELECT foo.whatever ...` because foo.whatever is part of the FROM's results, not the JOIN's results
mehworkthat's all i was really wondering
thumbsmehwork: you can't have two select clauses in the same statement.
thumbsmehwork: do you need a sql tutorial?
mehworkno i just forgot the parens
passagemehwork: still wouldn't work
sallemehwork: It makes sense to write: SELECT ... FROM a AS foo JOIN (SELECT ...) AS bar ...;
passagemehwork: you'd need another JOIN
sallemehwork: However it will make little sense if any to refer to foo from within the derived table
sallemehwork: Why would you want to do that?
thumbsmehwork: do you understand the correlated behaviour? Can you show an example?
mehworksalle: just wanted to have a condition based on the outer table's column, but i don't need to since i can use it in the ON clause to filter things down and grab it in the outer table's select
sallemehwork: Makes no sense you know
mehworki'm agreeing
mehworkjust took me a while to see why it didn't make sense
sallemehwork: How would you do it if you were the server?
mehworkit's not that i need a tutorial, it's that i need better sql tutorials. They all seem focused on the absolute basics and don't go into derived tables much
sallemehwork: Get pen and paper and write down two small tables with 2-3 rows each. Then try doing what you want to do with this query
sallemehwork: There are plenty of SQL books and tutorials about advanced stuff
mehworkplease link me
sallemehwork: It will help if you tell us what kind of query you are trying to write.
sallemehwork: Chances are high you don't need a derived table at all
mehworksalle: one sec i'll show you what i'm trying to do
mehworksalle: I have two tables: 'signups' and 'logins'. I want the count of people who logged in within 30 days of signing up. The 'logins' table contains the times of their last logins. The 'signups' table contains the times they first signed up.
mehworki can join on user_id
sallemehwork: Sounds like simple join to me
sallemehwork: inner join by the way not outer
mehworkalso, the logins table contains where they logged in from and i need a WHERE condition to filter out people to just who logged in from a mobile device
mehworkthat's the main reason i used a derived table in the join
thumbsmehwork: wrong assumption.
sallemehwork: SELECT .. FROM signups s JOIN logins l ON s.user_id = l.user_id WHERE l.login_ts <= s.signup_ts + INTERVAL 30 DAY;
sallemehwork: add AND device='mobile' or whatever the condition is and you are done
sallemehwork: The above will give you all such logins so you probably want SELECT DISTINCT user_id FROM ...; or user_name or whatever else
mehworkok thanks, let me try that
sallemehwork: I would use INTERVAL 1 MONTH, but that's another story
sallemehwork: Now .. if there are a lot of rows in logins it could be faster to use a derived table to get the first logins only
sallemehwork: If you are familiar with GROUP BY it should be obvious. If you are not better learn about GROUP BY queries :)
sallemehwork: You need LEFT JOIN only in case you want *all* signed users including users who never logged in
sallemehwork: By the way l.login_ts <= s.signup_ts + INTERVAL 30 DAY; is obviously wrong. You need BETWEEN there
mehworkl.login_ts BETWEEN l.login_ts AND s.signups_ts + INTERVAL 30 day ?
sallemehwork: BETWEEN s.signups_ts AND s.signups_ts + INTERVAL 30 day
mehworkis s.signups_ts + INTERVAL 30 day the same thing as DATE_ADD(s.signups, INTERVAL 30 DAY) ?
sallemehwork: Yes same thing and in my opinion more readable
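Salle's pieces assembled into one query (login_ts, signup_ts and device are assumed column names), counting users who logged in from a mobile device within 30 days of signing up:

    SELECT COUNT(DISTINCT s.user_id)
    FROM signups s
    JOIN logins l ON l.user_id = s.user_id
    WHERE l.device = 'mobile'
      AND l.login_ts BETWEEN s.signup_ts AND s.signup_ts + INTERVAL 30 DAY;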
wfqhi all
mehworkalright thanks for all the help, i've learned a lot
sallemehwork: Depending on what you want to select there are different ways to write your query. If only user info, a join is enough. If you want to select the timestamp of the first login, join with GROUP BY or a derived table with GROUP BY
wfqhow could I debug Mysql in order to understand how much memory is being used. A php process running a query is getting killed all the time. It gets killed right after 2 selects that bring back only 200 records (floats and numbers) are executed. "dmesg" is telling me this all the time: mysqld invoked oom-killer:
sallemehwork: Try this using login table only: SELECT user_id, MIN(login_timestamp) FROM login GROUP BY user_id;
wfqfree -m shows 1.6 GB available
wfqI don't get what is going on here
sallemehwork: You can do the same with JOIN, but you have to be careful what you put in SELECT
mehworkok
sallewfq: reduce some buffers
wfqThe whole thing worked well until all of a sudden the php process got killed and from then on, that is it. There is no way to get it running
salle!t wfq mysqltuner
ubiquity_botwfq: http://rackerhacker.com/mysqltuner/
sallewfq: Try this script and see what it will tell you
wfqsalle, sorry what do you mean please? This is a testing server and there are no Mysql queries at all except mine. How could it be possible that only 200 records break the whole thing
wfqok, checking
mehworksalle: wait, i do need a derived table then in order to be able to MIN(login_timestamp) and group by, right?
sallewfq: Yes badly written query can cause excessive RAM usage even if the tables involved are small
sallemehwork: Nope. At least not always
sallemehwork: Depends on what exactly you want to select. If you need columns from all the tables you join including some info about that first login then you need one of the groupwise-max methods
salle!t mehwork groupwise max
ubiquity_botmehwork: http://jan.kneschke.de/projects/mysql/groupwise-max/ http://dev.mysql.com/doc/refman/5.7/en/example-maximum-column-group-row.html
sallemehwork: s/max/min/g for your needs
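A sketch of the derived-table variant salle is pointing at: compute each user's first (mobile) login once, then join it back to signups. Note it answers a slightly stricter question -- users whose *first* mobile login fell within the 30-day window:

    SELECT COUNT(*)
    FROM signups s
    JOIN (SELECT user_id, MIN(login_ts) AS first_login
          FROM logins
          WHERE device = 'mobile'
          GROUP BY user_id) AS fl ON fl.user_id = s.user_id
    WHERE fl.first_login BETWEEN s.signup_ts AND s.signup_ts + INTERVAL 30 DAY;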
wfqsalle, http://pastebin.com/shTvmQmp. Interesting tool
wfqsalle here you can see the report, the three tables involved and the queries. I don't see anything here extremely wrong as to create this sort of problem
wfqsalle, as you can see the database is only 7M. It is just ridiculous
wfqThere was a cronjob working beautifully until it stopped completely. I can't see anything wrong
wfqhow could I determine what is going on here?
wfqthe slow query log isn't showing anything
wfqfuck sake
wfqguys would you consider this a slow query: Query_time: 0.125391 Lock_time: 0.002474 Rows_sent: 1333 Rows_examined: 646588. This corresponds to this: select b.entityId from balances b join entity e on b.entityId=e.entityId left join ledger l on b.entityId=l.entityId;
wfqI just don't know what to look at in order to understand why mysql gets killed
thumbsexamining 600k rows is terrible.
thumbswfq: what does EXPLAIN say?
wfqthumbs, http://pastebin.com/4ymQJ22p. Yes, examining +600,000 rows looks terrible. The tables involved have 1300, 1100 and 580 rows respectively
thumbswfq: umm, pastebin.com times out. I'll try later.
wfqthumbs, just realised that one of the fields in one of the tables is not indexed in spite of being a foreign key - this is the bloody legacy code however that wouldn't explain +600000 queries?
wfqhttp://paste.ubuntu.com/20062526/
wfqsorry the 600000 examined rows
thumbswfq: yes, you should index the columns used in ON clauses.
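For wfq's query that would look something like this (guessing that ledger.entityId is the unindexed foreign key):

    ALTER TABLE ledger ADD INDEX idx_entityId (entityId);

    EXPLAIN SELECT b.entityId
    FROM balances b
    JOIN entity e ON b.entityId = e.entityId
    LEFT JOIN ledger l ON b.entityId = l.entityId;
    -- rows examined should drop sharply once the joined column is indexed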
wfqin addition to doing that, that would not explain the 600K lookups. That would slow down the query but not have an effect on the number of rows examined?
wfqIt is the left join that is killing the query
wfqwhen it is an inner join the lookups are only around 2000
wfqthumbs, I am afraid I won't be able to do much with it if I need a left join
thumbswfq: on the phone
wfqsolved it! It actually was the indexing thing. Now the total examined rows are +3000
wfqwell it is not solved! Still the Mysql process gets killed! I have now reduced the lookups from +600000 to +3000
aradapilotwfq: mysql gets killed? anything in the error log?
Sammitchso I've got a single DB server that's migrating to a different environment where it will be a master/slave pair. assuming no latency/bandwidth issues between the pair, is it better to restore the backup to both servers and then set up replication, or set up replication and restore to the master?
aradapilotif you already have a copy of the db, just restore it to both
aradapilotjust set up replication before you start write traffic
aradapilotif you need to do it staggered, and have the new master be written to while you work on the new slave, then go the other route as it's not that much harder but you won't risk making a mistake
Sammitchwell, when I say I've got "a single DB server" I've actually got ~400. so I'm automating the process, but doing things in parallel is not my forte :I
aradapilotthe more foolproof way to do it is to set up new master, then xtrabackup copy it to the slave to set it up. a bit more work by hand, but you want to script it if it's a couple hundred like that
aradapilottakes slightly longer, but no risk of messing up and having to redo it
aradapilotif the db is small you can stream mysqldump|mysql too, less steps to script but slow and can fragment large dbs
raymond!t me about orm
ubiquity_botraymond: Some required reading about ORMs can be found here: http://blogs.tedneward.com/2006/06/26/The+Vietnam+Of+Computer+Science.aspx
sagestoneI have some data such as this: http://hastebin.com/enubiwicon.md
sagestoneI want to make a new column called `date_rank` and get MySQL to assign a rank based on a user's earliest game played as the first game played, then any subsequent games played by that user to be ranked 2,3,4 etc etc. Any pointers on how to go about doing this?
samotarnikhey everyone, i'm writing some queries an a certain select comes up very frequently. how can i abstract it so i don't have to repeat it all the time? are stored procedures the correct answer?
samotarnik*and a certain select
nvidhivea view?
samotarnikactually it takes "parameters"
samotarniki'm querying relationships but always for a different initial id
lamnethSamotarnik : your links don't work. .
samotarniklamneth: ?
lamnethsamotarnik: do you have a pastebin so we can help?
samotarniklamneth: right...
samotarnikfor instance, i wrote this procedure, that comes up all the time in my queries: http://pastie.org/private/0tlcctjxo5y9rebutfvvrw but now, i would like to use it in a higher level query, i.e. do something with its results and i can't do that using procedures if i understand correctly
samotarnik(input params are of course always different)
lamnethsamotarnik: why don't you create a view?
samotarnika different one for every possible combination of input params? or how?
samotarniki guess i don't know how to do it properly... ?
Sammitch11:42 < aradapilot> if the db is small you can stream mysqldump|mysql too, less steps to script but slow and can fragment large dbs
Sammitchwhat do you mean by "can fragment large dbs"
aradapilotwell, a logical restore can result in secondary keys becoming fragmented, as they're only loaded in PK order
aradapilotnot generally a huge deal
aradapilotif you weren't planning for it before, it's likely nothing to worry about
Sammitchhrmm, well that's really only a problem if I under-size the ram and mysql has to keep going to disk for indexes, yes?
aradapilotno, it affects both memory and disk
aradapilotbut a degree of it is expected
aradapilotit's just that using logical dumps can suddenly change the degree you're used to
SammitchI assume that the only way to defragment a secondary index would be to drop and re-add it?
aradapilotbasically. or any alter that rebuilds the table.
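For InnoDB either of these rebuilds the table and its secondary indexes (table name is a placeholder):

    OPTIMIZE TABLE mytable;              -- mapped to a rebuild for InnoDB
    ALTER TABLE mytable ENGINE=InnoDB;   -- the classic "null alter" rebuild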
aradapilotbut, say, if you have an index that currently takes 50G, and do a logical dump, it might suddenly be 100G (and proportionally slower to use)
aradapilotso it's something to think about
aradapilotthat'd be the most possibly drastic change, but, just an example to show
Sammitchno indexes *that* large. I think our largest DBs have roughly 20GB of indexes in total.
aradapilotso yeah, doubt it'll have any significant effect, but keep that in mind
Sammitchon reflection I think that the nature of our data will keep a more or less constant state of fragmentation
aradapilotif you want to learn more, check out jeremy cole's innodb_ruby suite, which gives you a nice view into your indexes
aradapilotyep, that's expected, it's just a sudden change that can be a problem
SammitchI'll keep it in mind, thanks for the info! :D
aradapilotcheers
mgriffinis there some optimizer version that makes OR as fast as UNION without a query rewrite
subsumeif i have table pk, id1, id2 how can i get all unique combinations of id2 per id1?
thumbssubsume: SELECT DISTINCT id1, id2 FROm tbl
subsumei don't think that's quite it, maybe i am not stating well enough
thumbssubsume: yes. Start over.
thumbssubsume: or maybe SELECT id1, GROUP_CONCAT(id2) FROM tbl GROUP BY id1
subsumei have a lot of id1 and about 300 different id2s and i want to know how many unique clusters i have of id2
thumbssubsume: that's a different requirement.
subsumeyeah but the group concat seems closer
thumbssubsume: SELECT id1, COUNT(DISTINCT id2) FROM tbl GROUP BY id1
subsumewon't that show me a row per id1?
thumbssubsume: clarify on "how many unique clusters i have of id2"
thumbssubsume: show the literal output you're expecting.
subsumeyeah starting a dpaste
thumbsmgriffin: umm
subsumehttp://dpaste.com/2DBDTNY
thumbssubsume: what does '1' represent here?
subsume1 unique id1 found
thumbssubsume: that's a bit pointless. So you just want a comma-separated list of the values from id2?
subsumei'm just trying to find out how many different combos i have in this table basically
thumbssubsume: SELECT GROUP_CONCAT(id2), 1 FROM tbl GROUP BY id1
subsumei need the 1 to be a count, though
thumbssubsume: it's always going to be 1.
subsumeupdated paste :P http://dpaste.com/3TB9ZDY
thumbssubsume: SELECT id2, COUNT(*) FROM tbl GROUP BY id2
subsumebut that doesn't give me combos of id2
thumbssubsume: use your application to represent those.
subsumeits important here to count how many of the id2 have "shared" id1
thumbssubsume: this isn't a task that sql is well-suited for
subsumeokay, if i supply an id2 do you think you could help me with that query?
thumbssubsume: it's mostly nonsense. so probably no.
subsumeyeah i guess its a toughie for sql
subsumewelp, thanks a bunch
thumbssubsume: here's a hint: use a subquery, and GROUP BY
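One way to read that hint (tbl, id1, id2 as in the dpaste): build each id1's combination of id2 values in a subquery, then GROUP BY the combination to count how many id1 share it:

    SELECT id2_set, COUNT(*) AS num_id1
    FROM (SELECT id1,
                 GROUP_CONCAT(DISTINCT id2 ORDER BY id2) AS id2_set
          FROM tbl
          GROUP BY id1) AS combos
    GROUP BY id2_set;
    -- watch group_concat_max_len if an id1 can have many id2 values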
tomvolekHI : I had a running mysql, now all of a sudden I am getting this message when I try to login to it :
tomvolekmysql
tomvolekERROR 2013 (HY000): Lost connection to MySQL server during query
thumbs!t tomvolek gone away
ubiquity_bottomvolek: http://dev.mysql.com/doc/refman/5.7/en/gone-away.html
tomvolekthanks
Lenclhello
Lenclhttp://pastebin.com/QYmeF3LD
Lenclthe KEY `item_id` (`item_id`) is a foreign key right?
Lenclisn't that what it means
litheumLencl: no, it's just an index.
dragonheartno, its an index
litheumLencl: you have not defined any foreign keys, and myisam does not support foreign keys anyway
Lenclohh that's why foreign key didn't work then
Lenclit didn't even tell me
lawhey all, I'm trying to translate a mysql 5.5 config file into a mysql 5.7 config file, and having some issues
lawmain one is that 5.7 does NOT like the 'key_buffer' directive - it won't restart with that defined
thumbsperfect reason to ditch MyISAM, law
lawtrust me, it's happening tonight
lawbut first I've gotta get the config translated over, get the data loaded from the legacy DB in, ALTER TABLE on the last vestiges of myisam, etc
passagelaw: then just comment it out
thumbsI'd probably remove key_buffer and let 5.7 use default values for it.
lawshould I be concerned with a sustained write-speed of 5000kbit/sec on a public cloud VM?
dragonheartonly if you need more than that
lawnot sustained
lawiowait is 'only' at 0.19% while I'm restoring this database dump
dragonheartwell aws has dedicated iops. other products vary.
lawhrmm, I'm trying to convert these myisam tables to innodb, but each one gives me:
law`ERROR 1067 (42000): Invalid default value for 'created_at'`
uid1law: Check sql_mode - I'd bet that zero-dates is disallowed
lawVariable_name: sql_mode Value: ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
lawis it safe/sane to remove that?
uid1NO_ZERO_IN_DATE is the likely culprit
lawis that best defined in my.cnf, or somewhere else?
uid1It can be set in a *.cnf file. Alternately you ought to be able to set @@sql_mode=[remove unwanted bit];
uid1Perhaps alter table so NULL is allowed and change '0000-00-00' to NULL
uid1I'm torn on whether zero dates are terrible or really useful
laweh, I set sql_mode in my.cnf and it's happy now :-)
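The two routes uid1 described, spelled out (table name and column type are guesses; only created_at comes from law's error):

    -- 1) relax sql_mode so the old zero dates are tolerated
    SET GLOBAL sql_mode = 'ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION';
    -- 2) or allow NULL and convert the zero dates, then keep the strict mode
    ALTER TABLE mytable MODIFY created_at DATETIME NULL;
    UPDATE mytable SET created_at = NULL WHERE created_at = '0000-00-00 00:00:00';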
lawdoes this my.cnf pass a smell-test for a VM with 32GB of RAM, SSD disks, etc? https://paste.fedoraproject.org/392701/98207014/
dragonheartdoes to me. lines 33-35 look a little unnecessary.
laweh, $boss wants them in there 'just in case' - legacy from vestigial myisam tables that were just upgraded