vaqHi, I got a timestamp like 2016-06-31 00:00:00, how do I select records from this timestamp plus 1 year back?
vaqtried where timestamp <= DATE_SUB('2016-06-31 00:00:00',INTERVAL 1 YEAR);
enlethvaq: don't you have that <= backwards?
jkavalikvaq, do you want to select the one year or anything older than one year? "timestamp <= something" are the rows older than "something"
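A sketch of the range predicate jkavalik is getting at (table and column names are placeholders; note also that 2016-06-31 is not a valid date, since June has 30 days):

```sql
-- rows from the one-year window ending at the reference timestamp
SELECT *
FROM t
WHERE ts >  DATE_SUB('2016-06-30 00:00:00', INTERVAL 1 YEAR)
  AND ts <= '2016-06-30 00:00:00';
```

A bare `ts <= DATE_SUB(...)` alone selects everything *older* than one year back, which is the opposite of "this timestamp plus 1 year back".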
the_lastcan you have a join table with more than 2 join columns?
jkavaliksure, but don't overcomplicate it
jkavalikjoin table is a designation, not a different type of table, but if you need other columns in there it usually stops being a "join table" by definition and becomes a separate entity - but a 3-way join table might be a thing
vegetablesalad19I can't connect to my DB from terminal, I always get "Access denied for user 'usr'@'moapp2' (using password: YES)". I have tried with prompt password, giving password with -p****** and --password=*****. I can connect with my sql workbench and sqladmin. Is there any extra settings that I should enable for using sql with terminal ?
vegetablesalad19I'm quite sure I was able to connect with it few years ago, haven't used this server for a while. But I could be mistaken
Krisostoomushello. I want to list pricing table for all users that don't have a record in pricing table
KrisostoomusSELECT,,, pricing_others.price from zones left join pricing_others on( left join users on( where!=11 and and pricing_others.price is NULL
Krisostoomuswhat is wrong there. it lists only 1 record.
Krisostoomusbut it needs to be for all users that have pricing not set
jkavalikKrisostoomus, then select FROM users and left join the other tables?
Krisostoomusstill empty
KrisostoomusSELECT,,, pricing_others.price from users left join pricing_others on( left join tsoonid on( where!=11 and and pricing_others.price is NULL
jkavalikKrisostoomus, using a column from left joined table in the WHERE clause removes all the rows which would return NULL from the join because both "NULL != 11" and "NULL = 13" are false
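In other words (table and column names taken from the queries pasted above), the condition on the left-joined table has to live in the ON clause for the NULL-extended rows to survive; the only safe WHERE test on that table is IS NULL:

```sql
SELECT u.username, p.price
FROM users u
LEFT JOIN pricing_others p
       ON p.user_id =   -- condition in ON keeps unmatched users
WHERE p.price IS NULL;        -- IS NULL is the anti-join filter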
Krisostoomusdo you understand btw what i try to do?
Krisostoomuswhat would be the solution?
KrisostoomusI am getting nearer
KrisostoomusSELECT,,, pricing_others.price from users left join tsoonid left join pricing_others on( where pricing_others.price is NULL and
Krisostoomusnow it gives some error
KrisostoomusSELECT,,, pricing_others.price from users left join tsoonid left join pricing_others on( where pricing_others.price is NULL
Krisostoomusif i left join users and tsoonid there seems some error
Krisostoomusseems like i can't join them so easily
Krisostoomusis it possible to left join without ON ?
jkavalikKrisostoomus, not sure what exactly you are trying, you would have to give some example data and expected results
KrisostoomusSELECT,,, pricing_others.price from users cross join tsoonid left join pricing_others on( where pricing_others.price is NULL and;
Krisostoomusthis seems to be the solution
Krisostoomuscross join
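Tidied up, the pattern Krisostoomus arrived at is an anti-join over the user x zone cross product (this sketch assumes pricing_others also has a zone_id column, which the later INSERT attempts suggest):

```sql
SELECT u.username,
FROM users u
CROSS JOIN tsoonid z
LEFT JOIN pricing_others p
       ON p.user_id =
      AND p.zone_id =
WHERE = 13
  AND p.price IS NULL;        -- user/zone pairs with no price set
```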
Krisostoomushello again
Krisostoomuswhat is here wrong :
KrisostoomusInsert into pricing_others (t1.userid, t1.zoneid, pricez) (SELECT 0.044 as pricez, as zoneid, as userid from users cross join tsoonid left join pricing_others on( where pricing_others.price is NULL and t1
Krisostoomuserror: #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 't1' at line 2
KrisostoomusInsert into pricing_others (user_id, zone_id, price) VALUES (SELECT userid, zoneid, pricez FROM (SELECT '0.044' as pricez, as zoneid, as userid from users cross join tsoonid left join pricing_others on( where pricing_others.price is NULL and t1)
Krisostoomuswhat is wrong there?
Krisostoomusgot it working
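For the record, the #1064 errors above came from mixing VALUES with a subquery: INSERT ... SELECT takes the SELECT directly, with no VALUES wrapper. Roughly (names as in the queries above):

```sql
INSERT INTO pricing_others (user_id, zone_id, price)
SELECT,, 0.044
FROM users u
CROSS JOIN tsoonid z
LEFT JOIN pricing_others p
       ON p.user_id = AND p.zone_id =
WHERE p.price IS NULL;
```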
DjKadLol again ahahahah
DjKadsome people take this channel like a sort of debugger
DjKador wall for cry
donoveKrigsforbryter: do you try each select statement individually?
jkavalikthey left before I managed to write that a cross join most probably is not what they want..
canttihi all
canttiany idea why select cast('2017-01-02' as date) - cast('2016-12-28' as date); equals 8874?
snoyesbecause subtracting dates directly does weird things.
snoyesUse instead
canttithanks snoyes
canttiforgot about that :)
donovecantti: yes, because cast('2017-01-02' as date) => 20170102 and cast('2016-12-28' as date) => 20161228 and 20170102-20161228=8874
salledonove: You don't need to cast values like '2017-01-02' as date
salledonove: They *are* date
sallecantti: ^^^
sallecantti: That was for you actually
canttithanks salle
canttiand donove
canttidonove: it isn't a bug then?
donovesalle: just explain this logic;)
donovecantti: no, it's normal
sallecantti: What donove explained is technically incorrect to be honest
sallecantti: When you CAST('2017-01-02' AS DATE) nothing really happens because the value already is properly formatted as DATE
sallecantti: It is the - operator which converts values to numbers
canttithat's nice to know salle
canttimakes much more sense
donovesalle: this syntax is correct like 1=1, or 'a'='a'
donovebut this goal is special ^^
sallecantti: The rule for converting DATE and DATETIME to numbers is to strip all non-numeric characters. In this case -s so both values become 20161228 and 20170102 respectively
donoveand isn't efficiency
sallecantti: All you need is to keep it in mind or write down somewhere that you can't use arithmetic operators directly on temporal types such as DATE, TIME, DATETIME, TIMESTAMP. There are plenty of functions to use
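The difference in one place:

```sql
-- '-' coerces both DATEs to numbers: 20170102 - 20161228 = 8874
SELECT CAST('2017-01-02' AS DATE) - CAST('2016-12-28' AS DATE);  -- 8874

-- DATEDIFF counts days, which is almost always what was meant
SELECT DATEDIFF('2017-01-02', '2016-12-28');                     -- 5
```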
Krisostoomuscan someone help me?
KrisostoomusSELECT * FROM (SELECT as user_id, as zone_id, 0.044 as price1 from users cross join tsoonid) t1 LEFT JOIN pricing_others ON pricing_others.user_id=t1.user_id where pricing_others.price is NULL and t1.zone_id=13 and t1.zone_id!=11
Krisostoomuswhy this gives me 0 rows
canttisalle: I will keep that in mind, thank you very much! :)
passageKrisostoomus: because there is no data in your database that matches your query?
passageKrisostoomus: btw, if t1.zone_id=13 then t1.zone_id!=11, obviously. why include it?
dbclki'm having a problem running this -> select max(dt.created_at) as max_created_at, min(dt.created_at) as min_created_at from (select created_at from `samtt`.`mos` order by "id" desc limit 10000) as dt
dbclki'm always getting this value
dbclkwhich is the same
snoyesunless you're using the ansi_quotes, you don't want to order by "id" but rather by id or `id`
snoyesI forgive you.
salledbclk: In default mode MySQL interprets "id" as string not column id
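A minimal illustration of the quoting difference:

```sql
-- With the default sql_mode, "id" is a string literal - a constant - so the
-- ORDER BY sorts nothing and LIMIT grabs an arbitrary 10000 rows:
SELECT created_at FROM `samtt`.`mos` ORDER BY "id" DESC LIMIT 10000;

-- Backticks (or bare id) reference the column:
SELECT created_at FROM `samtt`.`mos` ORDER BY `id` DESC LIMIT 10000;
```

With `sql_mode = ANSI_QUOTES` enabled, double quotes would mean identifiers, as in standard SQL.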
dbclkok so I put it in ansi quotes ``
dbclkbut, my problem still persist
salledbclk: Why?
dbclkboth column values are the same
dbclki'm using min and max
salledbclk: Just write ORDER BY id as we all do
dbclkthat's what i did
dbclkthe problem still persist dudde
salledbclk: What problem?
snoyesso perhaps the last 10000 ids all have the same created_at
dbclkthis is the update
dbclki have over 10,000 records with different created_at value
salledbclk: I don't see any problem there
snoyesshow us: select created_at from `samtt`.`mos` order by `id` desc limit 10;
salledbclk: By the way you don't need a subquery for what you are doing
salleok not *this* subquery :)
dbclkhow can I accomplish it then? because the filters would run after the aggregation
sallesnoyes: Perhaps also SELECT DISTINCT created_at ... ORDER BY id DESC LIMIT ...
dbclkwhy distinct?
snoyessalle: would need select distinct from (select...limit...)
dbclkwhy distinct? my intent is to find the min and max from the subquery
snoyesSELECT created_at, COUNT(*) FROM (select created_at from `samtt`.`mos` order by `id` desc limit 10000) as dt GROUP BY created_at;
salleActually SELECT COUNT(DISTINCT created_at) ....
salledbclk: Because we don't quite believe you these 10000 rows have different created_at
dbclkok and why count?
snoyesso that we can show you that those rows are all the same
snoyesif my query with that GROUP BY shows just one row with a large COUNT(*) value, then all those rows have that same value
snoyesor if salle's query with COUNT(DISTINCT) returns 1, same thing - all the same value
snoyesyou've asserted "with different created_at value", which we are testing.
gtxbbhi, is there any issue that can surface if i master-slave between 64-bit and 32-bit servers?
gtxbband thank
gtxbbACTION needs a better keyboard
snoyesgtxbb: shouldn't make a difference; replication does not care about bitness
sallegtxbb: I can't think of any. Perhaps if you have some FLOAT, REAL or DOUBLE columns you can get some strange results, but such columns should be avoided anyway
gtxbbkewl.. needed to be sure it was documented.. thanks
passagedbclk: you've got some form of auto-updating on created_at, don't you? :-)
dbclkyep i do
daedelothanyone working with knowledge graphs?
domasonly with ignorance graphs
dbclkfrom the can I test what my storage engine is?
passagedbclk: show create table `foo`;
passagedbclk: or you can query the information_schema -
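e.g. (schema name is a placeholder):

```sql
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'samtt';
```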
ekcolysphey guys, I sometimes see in "show engine innodb status" a line that says 'TOO MANY LOCKS PRINTED FOR THIS TRX: SUPPRESSING FURTHER PRINTS'. Is there a way to get all the locks somewhere?
domasekcolysp: maybe a table in information_schema?
domasit will show locks blocked on, not locks held though
ekcolyspI've looked. The only one I find relevant was `INNODB_LOCKS`
domasekcolysp: ;-)
ekcolyspwow and ouch :(
domasmaybe they ported it
ekcolyspjust checked, it's not there :(
ekcolyspnot even in 10.1
ekcolyspoh well. Thanks heap tho domas
domaswhy do you need to see all locks?
carpiiare there any migration tools to assist with moving to STRICT mode ?
jadewis there a way to split the contents of a field before passing it to GROUP_CONCAT?
carpiiwhats the goal?
jadewlet's say I have two rows, with "a,b" and "b,c", I'd like to get "a,b,c" as a result, not "a,b,b,c"
carpiithis is before any grouping or concat
carpiijust varchar fields
passage!t jadew about list of doom
carpiithere might be a way, but not one I can think of (that wouldnt kill performance)
carpiiyou could possibly handle it in middleware code, although its not ideal
carpiiideal way would be to refactor schema :p
jadewI can refactor the schema, but I'm not sure if it's the best solution
jadewwould make all queries have extra subqueries
jadewit's hardly a better solution
jadew(normally I need to return that whole list)
jadewI guess I could make an additional list for queries
carpiiyou could maybe make a view of it and query that
carpiiso hard to offer any useful advice without really understanding all the requirements and schema tho
passagejadew: you're claiming that normalizing your table structure and using JOIN queries is a worse solution than what you currently are doing?
jadewwell, there are two modes here, 1) all the data is returned, including the CSVs
jadewand 2) all the data is returned + stats on the matched data
jadewI'd like to keep the number of queries to the minimum for the second case
jadewpassage, yes
passageok then
carpiiis the csv string supposed to represent some sort of tree hierarchy? ir its just a set of ids?
jadewset of IDs
carpiiwhat you doing with the result set after the query?
carpiiand wheres it being called from?
jadewwhich result set?
jadewthe full data?
jadewlet me paint a clearer picture
jadewI want to say "get X chunk of data, but also tell me all the available parameters for ALL the data that matched the query"
Xgcjadew: Correcting th design is a better solution. It'll tend to simplify all your logic related to this data.
Xgcjadew: But, there are also ways to solve the problem without changing the design.
carpiisolve meaning bodge, sure
jadewI don't mind changing the layout, but I fear it will add more complexity to the queries rather than reduce it
Xgcjadew: It will simplify your SQL.
jadewfor example, it will require a join, which will screw up other stats, because of the number of rows generated
carpiinot sure about simplifying the SQL as such, but it will help eliminate these horrible anomalies which the SQL language wasnt designed to solve
jadewmeaning that I'll need several queries
carpiinot necessarily
carpiiwhats calling this query? some website script, or an application or what?
jadewsome script, yeah
carpiiwhat sort of script?
carpiias in language
jadewphp, but why does that matter?
carpiimaybe it doesnt, just trying to understand your scenario :)
jadewI think I can get the normalized version to work too, but tbh, I was hoping to avoid that
Xgcjadew: Another approach is to let your application normalize the string. Let the SQL just return a,b,b,c, or whatever the mess is.
jadewI don't like long tables
carpiilong as in?
jadewcarpii, as in one field table
Xgcjadew: You don't really understand this topic well.
jadew+ foreign key
jadewXgc, that would mean to return all the values or all the rows
jadewit's a bad idea
carpiiIf i had to choose between 1 column and 50 rows, or 50 columns and 1 row
carpiiId choose rows every time
Xgcjadew: What would mean what?
jadewXgc, handling this in the application
carpiiyeah I agree, its a bad idea. Was just offering that as his 'bodge' solution
Xgcjadew: No. Just return the aggregated data, in the form you don't like. The application can then correct that string as needed.
Xgcjadew: I didn't suggest returning the raw rows, before the group operation.
jadewXgc, yeah, that's almost the same because it will return a huge string
jadewperhaps 10s of Mb in length
carpiifull of duplicate ids?
carpiiall because of this CSV thing?
jadewyes, in different configurations
Xgcjadew: Listen, this is your mess... and you don't seem to like anything.
carpiiok then its clear
carpiiyou should really redesign the schema
jadewcarpii, you're probably right
carpiinot probably :)
jadewI wonder what the performance hit will be
carpiiof what?
jadewbecause now, in mode 1, I'll have to always join and group_concat
carpiiif anything I would imagine performance will increase
passagejadew: the point of a RDBMS is related tables of data and JOIN queries
carpiibut id say performance is the least of your problems right now
carpiisince you cant even get the result set you want
passagejadew: but you've already made up your mind
Xgcjadew: group_concat is a bad idea for other reasons, one of which is that it has limitations on the width of the column it can generate.
carpiiwell, it can be reconfigured to 4G
XgcHad a similar question earlier, where the person was wondering why his comma separated list got truncated.
carpiibut yeah, not sure what the default is
carpiiactually default is 1k
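The limit being discussed is the `group_concat_max_len` system variable:

```sql
-- group_concat_max_len defaults to 1024 bytes; longer GROUP_CONCAT results
-- are silently truncated. Raise it per session when needed:
SET SESSION group_concat_max_len = 1024 * 1024;
```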
jadewyeah, I don't expect it to be that lengthy
jadew(when it's handling unique values)
Xgcjadew: How many comma separated items max do you expect for one group, before eliminating duplicates?
carpiieven if you redesign your schema, you could still use GROUP_CONCAT if you really must
jadewXgc, 10 - 15
Xgcjadew: Before?
carpiibut once its in 3NF, you wont be getting duplicates in the CSV to deal with
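A sketch of what that looks like once normalized (table and column names invented for illustration):

```sql
-- one row per (item, tag) pair instead of a CSV column
CREATE TABLE item_tags (
  item_id INT NOT NULL,
  tag     VARCHAR(32) NOT NULL,
  PRIMARY KEY (item_id, tag)
);

-- "a,b" + "b,c" collapses to "a,b,c" with no string splitting:
SELECT GROUP_CONCAT(DISTINCT tag ORDER BY tag) AS tags
FROM item_tags
WHERE item_id IN (1, 2);
```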
Xgcjadew: Only 15 max even with duplicates?
jadewXgc, 15 unique values
jadewper row
Xgcjadew: That wasn't my question.
jadewthat's max tho
jadewyou mean how many duplicates I expect?
Xgcjadew: I'm trying to find out how many items, including duplicates, are in these strings, before you reduce it to a unique list.
jadewit could be 10s of thousands
XgcSounds good. When you're ready to try to solve this, just ask. There are plenty of examples of handling comma separated messes like this. I've posted several.
XgcTrust me, without a design change, they are all very ugly. Don't expect performance other than bad.
jadewyou're probably right
XgcMariaDB could be used to hide some of the ugliness.
carpiiwe all know youre not gonna fix it :)
jadewI was considering building a plugin to expose a new function to deal with that, but meh
carpiiso kinda pointless tryna help
jadewcarpii, what do you mean?
ekcolyspdomas: sorry was away on meetings. I need to see all locks, because i need to pinpoint the deadlocks we're seeing.
jadewwhy would you think that?
XgcI'm assuming he won't correct the schema, but might ask about the ugly solutions available.
carpiijust a hunch ;)
carpiiby fix it i meant fix it properly
ekcolyspstill hazy, but I just need to make sure at least all the important locks are there, and not get truncated
jadewXgc, I knew about all the options that were given to me, including the schema change
jadewI was just wondering if there's something native that can handle that scenario
jadewparticularly since it's not an uncommon scenario
Xgcjadew: Are you sure? So you have a working query that works with your schema?
jadewXgc, I was going to build it, but I wanted to get the best out of it
carpiithe problem is you said the group concats could be enormous, so even trying to patch it up in middleware like PHP isnt feasible
Xgcjadew: I'm talking about solutions that will work with your current design. Maybe you don't know them all.
jadewI'm not gonna build something just so it works
jadewXgc, that's why I asked here
jadewcarpii, exactly
carpiiso your options would be to write a user-defined function to 'unique' the csv string in mysql perhaps, but performance will no doubt suck
Xgcjadew: I don't know which options were given to you.
carpiior you could write an extension, but thats a bit overkill
jadewcarpii, I agree it's overkill and I'm not sure it will improve performance over normalization anyway
carpiiit wont, normalisation is king in 99% of cases
carpiiuntil you intentionally denormalise, but then thats pretty specific cases
Xgcjadew: It can be done without a UDF. But it would likely perform rather badly. Anything against your current design will perform badly.
Xgcjadew: Feel free to show a quick testcase, with create table statement(s), and just a little data (inserts) for testing. That'll eliminate further guessing. Nothing fancy. It might take you a minute or so to create.
jadewXgc, nah, I'll just do what I said earlier, I'll have it both ways
jadewboth the CSV fields + normalized version
jadewthat way I'll have the best of both worlds
jadewI'm still not convinced that an extension couldn't deal with this just as fast as you would with the join
carpiiwell by extension i meant a compiled UDF
carpiias opposed to an SQL based one
jadewme too
carpiiand theyre very hard to maintain
carpiiand really limits MySQL's ability to use an index (if at all)
carpiiwell i mean thats true of SQL based UDF's too
carpiiso all an extension would do is make the UDF execute faster, but that wouldnt be where the performance issue lies anyway
jadewwell, splitting a string without generating copies is very fast
jadewin generating that unique group concat you'd just have to iterate over the tokens and compare them to the ones currently in the result set
carpiioh you mean an extension just to strip the duplicate IDs from the CSV string?
carpiiI was channel hopping so kinda forgot what we were talking about earlier :)
jadewyeah, basically a specialized GROUP_CONCAT
jadewI don't know how performant the join is tho, its overhead could be negligible
jadewin which case it doesn't matter much
carpiibut you mentioned one row could be 10Mb? or was that the full result set?
jadewbut otherwise the specialized GROUP_CONCAT would be very fast
carpiijoins are very performant with the right indices
jadewcarpii, that would be a GROUP_CONCAT without taking into account the fact that the field is CSV
carpiii dont know what that means
jadewso just taking uniqueness into account based on which fields have the same combination of values
carpiia single non-deduped row
carpiicould be 10Mb with all the ids in the CSV?
jadew'a|b' and 'b|c' would result in 'a|b,b|c'
carpiiand that one field could be 10Mb?
carpiiafter being concatenated
jadewso using group_concat as it is would result in most of the fields being concatenated
jadewthat's what I meant by 10 Mb
carpiiyeah i get that bit
jadewit could end up as a very long string
carpiibut for one row?
jadewno, for the entire result
jadewand that would mean a bit of processing time on the PHP side too
jadew(aside from the transfer itself)
Xgcjadew: I only asked about the max for one group.
carpiiwhat are the obstacles stopping you from redesigning the schema?
Xgcjadew: The entire result set is not important.
carpiifear of having to rewrite all your existing queries?
jadewXgc, yeah, and I answered about 15 values
Xgcjadew: Max before removing duplicates?
passage(he doesn't know how to do JOIN queries)
jadewcarpii, nope, there's just the one
jadewand the project is at its very beginning
carpiioh man
carpiiits a no brainer then
jadewI'm just considering my options
Xgcjadew: If that's true, you don't have a problem with the design change.
jadewI don't
carpiijadew, if you make this leap now, you will learn stuff that helps you for the rest of your career
carpiibut if you bodge it, youre a bum :)
XgcThat's not what you said above. Don't make a mess intentionally.
jadewcarpii, I don't know why you guys assume I don't have a clue :)
carpiijadew, well.... i guess we cant understand the reluctance to do things which have been systematically proven to be better
jadewin reality, storing values that way is a very intuitive way of doing it
jadewthe fact that the db engine doesn't deal nice with it, is a shortcoming of the engine
XgcContemplation of csv, with the requirement of having to process components of the csv dynamically in SQL, is a sign you might not know much at all about SQL.
carpiipeople have written endless Phd papers on this stuff
jadewcarpii, that's the thing, I wasn't sure it's better for this particular case, specifically because it has two very different usage scenarios
carpiijadew, maybe MySQL could provide more convenience features to cater for this, I agree
jadewand I'm trying to get away with as little duplicate data as possible
thumbsif you're new to relational databases, that might seem like an "intuitive" approach, I suppose.
jadewalso with the best possible solution
jadewif I were to accept the first advice I ever got, I wouldn't get too far with anything
Xgcand if you do know what you're talking about with respect to SQL and still doing this intentionally, you wouldn't be asking questions here.
carpiijadew, i agree. but Ive been using databases for 20 years now, been an SQL Server DBA, and run high traffic sites on MySQL these days
jadewwhere else could I hope to find information that I don't have?
carpiithere is a reason we advise you the way we do
carpiiand its not just so we can have a pop at you
jadewcarpii, I've been designing high traffic systems myself for more than a decade
carpiiok thats great, but database design seems to be your weak point :)
jadewthat doesn't mean I don't have new questions
carpiino shame in that..
Xgcjadew: You keep saying you have it all, you know what you are doing, you've considered the options, you know everything else is bad and you are going forward with your current approach.
carpiithe shame is only when you refuse to learn
jadewXgc, I already knew my options, I didn't like any of them
jadewthat's why I came here
passage(this might be the President-elect)
jadewI hoped I'd be given an option I'd like
jadewsomething I didn't know
carpiiyou could have rewritten your database in the time weve been discussing this
passagejadew: learn normalization, learn JOIN queries
jadewI already did
carpii3NF ? :p
carpiiok, good on you
jadewbut I kept the old stuff too
carpiiACTION snaps pencil
carpiiheh :)
jadewit's almost never written to
jadewit's only generated once and then read from
jadewit makes sense
XgcSo you do like the options. You just want to keep the denormalized form. That's a separate issue.
jadewXgc, it's the least bad option, can't say I like it
Xgcjadew: That's life.
jadewwhat I would have liked was a way to do it directly :)
Xgcjadew: We always look for the least bad option.
jadewperhaps using a different kind of field
XgcThat's called the best option.
XgcI bet your best option is to win the lottery and go on an extended vacation until you die.
jadewI think that's everyone's best option
jadewwhat matters then is how you spend the vacation
passageACTION reaches for the IGNORE switch after 60+ minutes of this drivel
carpiihopefully you wont be spending it pissing about with MySQL
jadewpretty sure I would haha
carpiiyeah i probably would too :(
jnewti'm trying to recreate a database on windows for testing, but i keep getting an error SQL Error (1286): Unknown storage engine 'XtraDB' (trying to re-create on mariaDB 10.1, original was 10.0.2).
jnewtany idea why?
Xgcjnewt: Go to the original server, connect and show the result of this: SELECT version();
Xgcjnewt: There's a #maria channel
thumbsshow engines; might help
XgcOn both systems.
XgcXtraDB doesn't appear to be enabled by default, at least not in 10.2.2
XgcOh. It identifies as InnoDB in MariaDB, apparently.
carpiisame for percona
jnewton working setup: 10.0.28-MariaDB-0ubuntu0.16.04.1 , on non-working: 10.1.19-MariaDB
jadewjnewt, so replace the engine name in your create statements with InnoDB
jadewisn't XtraDB what's used when you set the engine to InnoDB in Maria?
jnewtjadew: so XtraDB is good for 10.0.28 but not 10.1.19?
jadewcould be a bug in show create table
jnewti thought that XtraDB replaced InnoDB
carpiii dont know if they can even distribute InnoDB anymore, so MariaDB and Percona are using XtraDB anyway
carpiibut masquerading as InnoDB to allow them to be drop-in replacements
domasisn't percona on top of regular mysql/innodb?
carpiithey fork it I think, but XtraDB is actually Percona's own engine afaik
carpiijust I guess they let mariaDB use it too since its open source
carpiibut regular InnoDB isnt shipped with Percona
XgcFrom MariaDB docs: Percona XtraDB is an enhanced version of the InnoDB storage engine ... Percona XtraDB versions in MariaDB ... XtraDB from Percona Server 5.6.34-79.1 in MariaDB 10.1.20
jnewtXgc: so it should work in my 10.1?
Xgcjnewt: Seems so.
jnewtyet i get that error.
Xgcjnewt: Ask in #maria
Xgcjnewt: Check with the Maria folks. It might require configuration adjustments or a plug-in.