robertparkerxAm I still muted?
robertparkerxThank you thumbs
thmsWhat is mutex contention? Why does mysqltuner tell me to disable the query_cache due to mutex contention?
jkavalikthms, the query cache was implemented a long time ago, when having two processors was mostly a luxury, and it hasn't been rewritten since, afaik. So each time a query comes in, the query cache locks all threads on a mutex and checks whether it can serve the results from the cache - and similarly for all other operations. The effect is that when you have servers with 16-32 CPU cores, those may spend more time waiting for each other to check the cache than
jkavalik they would actually spend executing the query - the cache can end up slowing you down instead of having its intended effect
tunageI had to rescue a database off a dead server. now innodb will not initialize, nor read the database. I did delete the ib_log files https://bpaste.net/show/fdf7d5f35f7d
jkavaliktunage, first thing - keep the original datadir as a backup at all times
jkavaliktunage, "[ERROR] InnoDB: ./ibdata1 can't be opened in read-write mode" - did you set rights properly?
thmsjkavalik, the server doesn't have many cores but mysqltuner tells me to do this due to mutex contention, it's wordpress
tunagejkavalik dang, it was just permissions. thank you!
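(For anyone landing here with the same "[ERROR] InnoDB: ./ibdata1 can't be opened in read-write mode" after copying files off a dead server: the usual cause is ownership, and the usual fix is a sketch like the one below - assuming a Debian-style layout where the datadir is /var/lib/mysql and the server runs as the mysql user.)

    # stop the server before touching the rescued datadir
    sudo systemctl stop mysql        # or: sudo service mysql stop
    # hand the files back to the mysql user and make them writable by it
    sudo chown -R mysql:mysql /var/lib/mysql
    sudo chmod -R u+rwX /var/lib/mysql
    sudo systemctl start mysql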
jkavalikthms, mysqltuner used to promote the query cache and now it shames it big time :) as with all its "suggestions", you should either know or benchmark the effect of the change before committing to it; if you can assess the performance of your site with the qc enabled and disabled, you will know what is better for you
thmsSure I can, but how do you "assess perf"? Just the page load time, jkavalik?
jkavalikthms, that is the hard part.. it depends on what your site does, whether there is some "application" running or just pageviews. For a wordpress site, load time would probably be a good indicator, once you have some nontrivial but realistic number of views per minute
thmsjkavalik, it indeed is a wordpress, thank you for yout mle
jkavalikthms, other "metrics" are the max pages/second when you throw really many clients at it, or CPU/HDD/memory utilization for some realistic number of pageviews
thmsjkavalik, it's for the backend only
thmsload will do.
thmsfrontend is fast but there's some caching
jkavalikthms, just remember: to really get rid of the qc mutex, it has to be disabled in the config and the server restarted (and the qc cannot then be enabled without another restart, iirc); if you just turn the qc off on the fly, some codepaths stay enabled which still check the mutex and can slow things down a bit
thmsjkavalik, Thank you :)
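(What jkavalik describes above amounts to two settings; a minimal my.cnf sketch, assuming a stock 5.6/5.7 server - the variable names are standard, the file location varies by distro. A restart is required for query_cache_type=0 to actually remove the mutex.)

    [mysqld]
    query_cache_type = 0    # disable the query cache code path entirely
    query_cache_size = 0    # release the memory that was reserved for it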
thmsI don't have the password
thmshow can I run OPTIMIZE TABLE on all tables?
thmsall solutions I've seen use some bash password.
thmsI connect with mysql --defaults-file=/etc/mysql/debian.cn
jkavalikthms, don't run optimize table on all tables
thmsI heard it cannot do harm
jkavalikwell, it locks the tables for some time, so it can temporarily block your site for one
jkavalikare your tables innodb or myisam? if innodb then there is no need for optimize, with myisam there might be some effect for fragmented tables maybe
thmsjkavalik, innodb I think?
thmsjkavalik, I don't care if there is some downtime it's a tiny table it wont be long.
jkavalikthms, well, if it is a small innodb table then there is really no need to run optimize; mysqltuner won't stop reporting it, because it reports "fragmented" tables and it just reports most innodb tables as fragmented because they have a different architecture from myisam tables
thmsjkavalik, alright
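(If, like thms, you are not sure which engine your tables use, information_schema will tell you; a sketch, with 'your_database' standing in for the real schema name. The data_free column is what mysqltuner reads as "fragmentation".)

    SELECT table_name, engine, table_rows, data_free
    FROM information_schema.tables
    WHERE table_schema = 'your_database';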
Bishwhat is a clever way to check whether a row of a table has a foreign-key-connected row in another table
Bishjust the existence
Bishin a single select
UlrarHi, I have a virtual machine with a MySQL on it, and it seems like a request is getting stuck on "statistics" over and over, enough that it makes the load of the machine go very very high every hour, we had to put a cron to restart mysql so that it won't crash
UlrarIs there a way to know why it's stuck on statistics ?
BishUlrar: what does mysqltuner say?
Bishlog slow queries, maybe?
Ulrar[!!] Temporary tables created on disk: 45% (3K on disk / 7K total)
Ulrarthat's the only red
NaktibaldaBish: select with join
BishNaktibalda: yeah i figured, but i want like a "binary field" which says there is data "connected"
Bishhow do i do that
NaktibaldaSELECT COUNT(connected_table.id)>0 AS has_data
Bishreally? :o *tests*
Bishbut doesn't that subquery add a lot of overhead?
jkavalikUlrar, is you buffer pool size set properly? any disk performance problems?
NaktibaldaI didn't tell you to use a subquery
Ulrarjkavalik: The pool size is 2GB; according to mysqltuner it's using 48 MB. As for disk performance, it isn't great - the VM disks are on glusterfs
UlrarBut we have other clients on the same infrastructure working fine
jkavalikUlrar, some complex queries where optimizer might try to take too many possible indexes into account?
UlrarIt's always the same query, and it does indeed look a bit complex with some chained INNER JOINS
BishNaktibalda: well, but what you do there is 2 selects in query
Naktibaldajust add that count to your select
UlrarIt's so long the mysql-slow is saying "...skipping..." in the middle of the query
Naktibaldaand GROUP BY this_table.id
Bishselect *,count(y.x_id)>0 has_data from x,y where x.id = y.x_id <= is what you suggested
Bishisn't it?
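(For reference, Naktibalda's suggestion spelled out: the comma join above is an inner join, so rows of x without any connected row in y would disappear entirely; with a LEFT JOIN and GROUP BY, the COUNT(...) > 0 flag is 0 for those rows instead. A sketch with the hypothetical tables x and y, y.x_id being the foreign key:)

    SELECT x.*,
           COUNT(y.x_id) > 0 AS has_data   -- 1 if at least one connected row exists, else 0
    FROM x
    LEFT JOIN y ON y.x_id = x.id
    GROUP BY x.id;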
shiblyHow many rows can be inserted in a table?
shiblyMaximum rows limit
Bishshibly: well, unlimited, basically
Bishdepends on hw
shiblyNo, it can't be unlimited.
Bishwhy?
shiblyAre you sure?
Bishwell.. unlimited is a stupid answer.. since every machine will die before that
shiblyWhat can be the maximum size of a mysql database?
Bishbut practically, unlimited
shiblyNo
Bishshibly: im guessing it will be the maximum size the filesystem allows
Bishwhich is pretty much unlimited, considering filesystems using 128-bit addressing
jkavalikshibly, there are some limits per table/partition size etc, but the actual number of rows will depend on row size anyway
shiblyjkavalik, What's the row size?
jkavalikshibly, an innodb partition seems to be limited to 64TB and a myisam table to a few times more with the right settings
jkavalikshibly, number and types of columns etc.. a row with two numbers and a row with 20 datetimes will take up quite different amounts of space
shiblyjkavalik, 64TB for a database?
jkavalikshibly, 64TB per tablespace, 32TB per table data, but you can partition it to overcome that limit http://mysql.rjweb.org/doc.php/limits
shiblyIf a table reached its maximum row limit, then what should i do?
shiblyThe table is full
shiblyWhat's the maximum row limit?
jkavalikshibly, again, there is NO maximum row limit
Naktibalda!m shibly full table
ubiquity_botshibly: See http://dev.mysql.com/doc/refman/5.6/en/full-table.html
jkavalikshibly, if you get "table full" then there will be some reason, but it does not mean hitting some internal limit
Naktibaldathat page used to have more information
Naktibaldaread the page that it links to - http://dev.mysql.com/doc/refman/5.6/en/table-size-limit.html
shiblySuppose it's a php-mysql website, a new record is inserted into the table, it reaches the limit, can't insert more, then what should i do?
shiblyShould i create another table and change the table name in php script?
jkavalikbuy a bigger disk probably
shiblySuppose that bigger disk is full and reached its limit, then what to do?
deepyBuy another one
deepyshibly: do you have an actual problem or are you trying to solve something you think will be a problem?
jkavalikmake a raid array of them and continue
shiblyIf i buy another disk, then would i have to change the database name and table name in php script?
shiblyDo i have to create new database and table in that new disk?
deepyshibly: how many millions of rows do you have?
jkavalikif you are planning for a 30TB+ table then you might want to rethink everything, because working with that much data will probably be a PITA long before it hits any "limit"
shiblyI'm thinking how does facebook manage such huge amount of data?
jkavalikshibly, you can just migrate the filesystem, or extend it, depending on what os and filesystem you use; if you have a raid already, then you just add the disk and inflate the volumes probably.. too theoretical, but most of these do not mean creating a new database/table
deepyshibly: you're not facebook, unless you have facebook amount of data you're not going to get anything out of looking at facebook
jkavalikcarefully and with a lot of experience
deepyAnd you don't have facebook data
kskhello. I tried putting a single database on an SSD partition using a symlink in the mysql datadir - however if I want to drop that database mysql runs into an error because it tries to "rmdir" that symlink - is there any workaround? thanks!
_August_what datatype should i use for a list of 20+ keywords?
Naktibalda_August_: you should use another table
shiblyhttp://dev.mysql.com/doc/refman/5.0/en/column-count-limit.html
shibly Every table (regardless of storage engine) has a maximum row size of 65,535 bytes. Storage engines may place additional constraints on this limit, reducing the effective maximum row size.
Naktibaldathis is a limit for a single row
Naktibaldaand you can get around it by using BLOBS
_August_blobs huh?
_August_the max amount of keywords might be 50 different words
shiblyI can't understand
shiblyHow many rows can be inserted in a table?
_August_this is what im trying to do
_August_i have a profile for a user, the user has different columns for whatever information im telling them to pass thru it
_August_I then have one column with just keywords
_August_im trying to figure out what type to give the column of 50 different keywords
Naktibaldashibly: have you read http://dev.mysql.com/doc/refman/5.6/en/table-size-limit.html ?
Naktibalda!t _August_ list of doom
ubiquity_bot_August_: http://gtowey.blogspot.com/2009/12/how-to-fix-comma-separated-list-of-doom.html
Naktibalda_August_: create a separate user_keywords table (user_id, keyword) add as many rows to it as you like
deepyshibly: how many rows do you have? we're doing billions of rows and nowhere near any limits
_August_wtf
_August_seriously?
shiblydeepy, Billions of rows? And not near the limits?
jkavalikshibly, once again, there is no specific limit for the NUMBER of ROWS, there are different limits for table size etc, and each table has a different row size depending on its structure; you can have a small table with millions of rows, and you can have a big table with under a million rows, containing really long rows (text/blob fields etc)
deepyshibly: it's a bit of a specific database, most of ours are only in the range of 30-80 million rows
jkavalik_August_, seriously, having a separate table where each keyword will have its own row will be much simpler and easier to work with in the long run
_August_isnt it going to take up more space, size wise?
shiblyjkavalik, What's the maximum size of a table?
Naktibalda_August_: it will save you a lot of CPU
Naktibaldayour queries will be much faster
shiblyNaktibalda, http://dev.mysql.com/doc/refman/5.6/en/table-size-limit.html Linux 2.4+ (using ext3 file system) 4TB , it's ext4 filesystem
shiblyNot ext3
Naktibaldaso it doesn't apply
Naktibaldawhat? you need more?
_August_ok well now i need to figure out how to write this to the db =/
Naktibalda1 INSERT per keyword in a loop, or this: INSERT INTO user_keywords (user_id, keyword) VALUES (1, 'foo'), (1, 'bar'), (1, 'baz');
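(A sketch of the separate table Naktibalda describes - the names are illustrative, not anything _August_ already has:)

    CREATE TABLE user_keywords (
        user_id INT UNSIGNED NOT NULL,
        keyword VARCHAR(64)  NOT NULL,
        PRIMARY KEY (user_id, keyword),   -- one row per user/keyword pair, no duplicates
        KEY idx_keyword (keyword)         -- makes "which users have this keyword?" an indexed lookup
    ) ENGINE=InnoDB;

    INSERT INTO user_keywords (user_id, keyword)
    VALUES (1, 'foo'), (1, 'bar'), (1, 'baz');

    SELECT user_id FROM user_keywords WHERE keyword = 'foo';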
shiblyIs it possible to know the size of my current database?
Naktibaldadatabase or table?
shiblydatabase
shiblyhttp://stackoverflow.com/questions/14714750/how-to-get-true-size-of-mysql-database , is this the answer?
jkavalikshibly, depends on why do you want to know.. for binary backups datadir filesystem size is more important, for mysqldump backup the actual dump size (possibly after gzip etc) is more interesting.. these queries are just for some approximation numbers
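(The usual approximation, for what it is worth - a sketch over information_schema; the numbers are estimates, especially for InnoDB:)

    SELECT table_schema AS db,
           ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
    FROM information_schema.tables
    GROUP BY table_schema;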
shiblyjkavalik, Any answer?
jwhyes but you timed out :)
jwh[11:20:55] < jkavalik> shibly, depends on why do you want to know.. for binary backups datadir filesystem size is more important, for mysqldump backup the actual dump size (possibly after gzip etc) is more interesting.. these queries are just for some approximation numbers
shiblyjwh, That's not the answer.
jwhyes it is
jwhalso, who cares about dataset size
jwhstorage is cheap
shiblyIf you buy a new disk, how will you add it to the old partition?
jwhby structuring your underlying storage properly
shiblyWhat does that mean?
jwhexactly what it says
shiblyThe old partition is in /dev/sda , new hard disk partition is in /dev/sdb
shiblyHow can you add space from /dev/sdb to /dev/sda1 or so?
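("Structuring your underlying storage properly" usually means something like LVM, where a new disk joins an existing volume group and the filesystem is grown in place - no database or table renaming involved. A rough sketch, assuming an ext4 filesystem on an LVM logical volume; the names vg0 and mysql-data are illustrative, and you would back up first:)

    pvcreate /dev/sdb1                       # turn the new partition into an LVM physical volume
    vgextend vg0 /dev/sdb1                   # add it to the existing volume group
    lvextend -L +500G /dev/vg0/mysql-data    # grow the logical volume under the datadir
    resize2fs /dev/vg0/mysql-data            # grow the ext4 filesystem to match (works online)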
shiblyACTION going outside.
berdarioHi, has anyone tried to use Ubuntu 16.04 here?
berdarioI plan to report a couple of issues on their launchpad
berdariobut in the meanwhile I wanted to find a workaround
berdario(basically, mysql5.7 overwrites the current empty password that is used on the developers' workstations, and mysqld is unable to read an init_file to restore the credentials)
salleberdario: Not related to Ubuntu
berdariosalle: what do you mean?
berdarioI just checked on a colleague machine
berdarioon 16.04, `sudo -u mysql mysqld` shuts down after a few seconds
berdarioon 14.04, `sudo -u mysql mysqld` works as expected
salleberdario: You should never start mysqld this way
berdariothis makes it impossible to restore the password
berdariosalle: please don't assume things
berdariohttp://dev.mysql.com/doc/refman/5.7/en/resetting-permissions.html
berdariothe problem is that Ubuntu's mysql5.7 will overwrite root's password
berdarioand will not drop a .mysql_secret file anywhere on the disk
berdario(I ran `find / -name .mysql_secret` to verify that)
berdarioso, rather than wiping the db, and reinstalling mysql-server
MACscr1why would i get table already exists when I do an sql import from the cli when i know for a fact that the sql has "DROP TABLE IF EXISTS" right before the table creation?
salleberdario: I don't assume anything. mysqld must never be started directly as with your `sudo -u mysql mysqld`
berdariothe simplest approach would be to reset the permissions
berdariosalle: please read the manual
salleberdario: I am reading it since late 1990s
berdariosalle: that's the only way to reset permissions... either you start mysqld_safe with --init-file, or you run mysqld_safe with --skip-grant-tables
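(For reference, the --init-file route from the manual page linked above boils down to a one-line SQL file handed to the server on startup, e.g. mysqld_safe --init-file=/var/lib/mysql/mysql-init.sql - the path is hypothetical and only needs to be readable by the mysql user. A sketch of the file's contents for 5.7; setting an empty password will be refused if the validate_password plugin is active:)

    -- /var/lib/mysql/mysql-init.sql
    ALTER USER 'root'@'localhost' IDENTIFIED BY '';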
salleberdario: mysqld_safe != starting mysqld with `sudo -u mysql mysqld`
berdarioI'm just running mysqld in my examples to rule out potential bugs in the mysqld_safe script
berdariosince mysqld_safe doesn't work
salleMACscr: It could be an InnoDB table which you screwed up by touching some files in the datadir
salleberdario: "doesn't work" is never a helpful statement
MACscrsalle: grr, i hate innodb. such a pain to recover
berdariosalle: that's why I asked if there's anyone here that tried mysql on Ubuntu 16.04, it would save me a lot of time of explaining things
MACscrsalle: ok, so what should i wipe to start the db from scratch so i can restore it?
berdariosalle: sorry if the mysqld_safe/mysqld was unclear
salleMACscr: If you want to "wipe out" all the databases then simply remove everything in datadir and initialize the mysql database with either mysql_install_db or 5.7 specific mysqld --initialize
MACscrno, just this db
MACscrand im using 5.6
salleMACscr: DROP DATABASE
salleMACscr: If you tried to remove the database directory while keeping ibdata file it is obvious why you can't DROP table which does not exist in the database dir and you can't CREATE it either ;)
MACscrbut i dont want to recreate the db, just the tables. if i drop the db, dont i have to recreate it and then redo all the user assignments, etc, for it?
berdariosalle: the problem is that mysqld_safe will shut down on its own after a few seconds
berdariohttps://gist.github.com/anonymous/f100c90593cbcc69500cdc8360aa90ae
berdariothis is the only output
berdarioand this is the strace of running mysqld_safe
berdariohttps://gist.github.com/5132ebfbd99c7e022b74f2018d07f573
MACscrsalle: wouldnt simply stopping mysql, rm -f /var/lib/msyql/dbname be good enough?
salleMACscr: It is bad idea even with mysqld stopped
salleMACscr: Don't do it
salleMACscr: If you already did it prepare for interesting times ahead
jwhberdario: where is ubuntu overwriting the password, in the init script?
MACscrwhy? a db dump restore should restore everything it needs
jwhexcept in cases where it will error because it can't load db files it thinks should exist
berdariojwh: I think it's in the installation/configuration script
salleMACscr: Because InnoDB maintains internal data dictionary so it sees the tables which are not on the disk and will not allow you to overwrite it
salleMACscr: s/it/them
MACscrso how do i wipe the internal dictionary for innodb in relation to this specific db?
jwhyou can't, it is interleaved in the log file
salleMACscr: With DROP DATABASE while the database still exists :)
jkavaliksalle, when I see a connection in processlist (id = 4, sleeptime some 18000s), is there some easy way to find the app holding that connection?
MACscrinnodb is shite. not even close to as resilient as myisam
jwhwhat
berdariojwh: if you run `apt install mysql-server` with mysql-server-5.6 installed, it will uninstall 5.6, install 5.7 and migrate the db... the db could be accessed with an empty password earlier (I don't like it, but that's how our local development machines are set up)
berdariobut after installing 5.7 they cannot
jwhberdario: mysql 5.7 defaults to a random password anyway, better get used to it :P
jwh(I agree it sucks)
berdariojwh: that's the problem
jwhbut.. the password should be in the error/general log on first start
berdariojwh: ubuntu doesn't leave the .mysql_secret file anywhere on disk
sallejkavalik: Hmm. I can't think of such way
berdariojwh: I also grepped the logs for the 'temporary password' string, but I couldn't find anything... do you have any hints at what I should be looking for exactly?
jwhberdario: well, in that case, if you really want to avoid password, stop mysqld, remove datadir, start manually with --initialize-insecure
sallejkavalik: Brute force => KILL 4; and see which app complains, but I am sure you know that
jwhinstead
jkavaliksalle, me neither, but tried asking :) might be possible to at least list all open sockets..
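(Listing the sockets does get you partway there: the processlist records the client's host:port for each TCP connection, and that source port can be matched to a process on the client machine. A sketch, assuming the client runs on a Linux host you can shell into; 52514 stands in for whatever port the processlist actually reports:)

    -- on the MySQL server: where does connection id 4 come from?
    SELECT id, user, host FROM information_schema.processlist WHERE id = 4;

    # on that client host: which process owns the reported source port?
    ss -tnp | grep ':52514'     # or: lsof -nP -iTCP:52514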
berdariojwh: it doesn't start manually :/ it shuts down on its own after a few seconds :/
jkavaliksalle, yep, was my first response to the coworker who's asking :)
jwhberdario: it won't start without datadir being intact, you need to initialize first
salleberdario: if it is fresh install mysqld --initialize-insecure will help
jwhmysql_install_db or whatever isn't used anymore
berdarioyes, but if it's a fresh install I could select a non-random password, login and then switch the password back to the empty one
jwhyes, hence stop and remove the datadir
jwhand initialize again
jwhit isn't rocket science :P
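(jwh's sequence spelled out as a sketch, for a 5.7 install where throwing away the current data is acceptable; the paths assume a Debian/Ubuntu layout, and the old datadir is moved aside rather than deleted, per jkavalik's earlier advice:)

    sudo systemctl stop mysql
    sudo mv /var/lib/mysql /var/lib/mysql.old            # keep the old datadir as a backup
    sudo mkdir /var/lib/mysql
    sudo chown mysql:mysql /var/lib/mysql
    sudo -u mysql mysqld --initialize-insecure           # one-shot init: root@localhost gets an empty password
    sudo systemctl start mysql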
berdarioI'd like to avoid forcing my colleagues to delete the whole database and then reimport it after the upgrade
jwhright, so what about just starting with --skip-grant-tables and fixing the user?
berdariojwh: it doesn't start with --skip-grant-tables
jwhhave you posted any logs?
berdarioit shuts down on its own after a few seconds
berdarioyes
jwhok one sec
berdariothe logs before are with --init-file
sallejwh: mysql_install_db is still in 5.7, just not used by default
berdarioI'll post new logs with --skip-grant-tables
berdario(it was just the strace output... )
jwhsalle: yeah, I meant deprecated
salleberdario: Doesn't Ubuntu use systemd?
jwh2016-04-18T11:54:25.704749Z mysqld_safe Logging to '/var/log/mysql/error.log'.
jwhso what are the contents of error.log? :)
berdariosalle: it does
jwhsalle: horribly
salleberdario: Perhaps that's why it doesn't like mysqld_safe
jwhsalle: same as debian, half systemd with script wrappers around it, totally horrible
jwheither go systemd or don't, honestly
sallejwh: I hate conspiracy theories, but systemd really looks like a well paid project to kill Linux
jwhhaha
jwhI think what is relevant here is that quote about not attributing to malice what you can attribute to incompetence
jwhor whatever it is
jwhlennart is a nutter
jwhhalf-baked, broken, just like pulseaudio ;)
berdariojwh: sorry, took a while https://gist.github.com/anonymous/b67a523af74d31859b14c0e763eb437c
berdariothe error with bash is slightly different
berdariohttps://gist.github.com/anonymous/204a916445c8e413fe366d37cd1b170e
jwhthe error is quite clear
jkavalikberdario, "[ERROR] /usr/sbin/mysqld: File '/tmp/.psub.7lbYNVWIMi' not found (Errcode: 13 - Permission denied)" sounds relevant
berdariofish creates the pipe in /tmp, bash creates it in /dev/fd
berdarioyup
berdariothat's nonsense
jwhfix permissions or tmpdir
berdariosince the file is O+R
jwhrun namei on it
berdarioin fact, you'll get the same error with any file
jwhnamei -mo /tmp/.psub.7lbYNVWIMi
berdariojwh: https://gist.github.com/anonymous/a8c127c554c6c408437bcbdce7ec5a5d
jwhthat wasn't what I said to run
berdariojwh: this is with `-mo` https://gist.github.com/anonymous/2569e19369e4ea0850600b9577ed11a7
jkavalikberdario, isn't this the systemd super feature with "private" temp dirs per app?
berdariojkavalik: that's interesting... I never heard of that feature
jwhknowing systemd, it could be anything batshit like that actually
jkavalikhttps://fedoraproject.org/wiki/Features/ServicesPrivateTmp
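(If PrivateTmp is what is hiding the file, the choices are to keep the init file out of /tmp - as tried with the datadir below - or to override the unit; a sketch of the override, assuming the unit is named mysql.service as on Ubuntu 16.04, followed by systemctl daemon-reload and a service restart:)

    # /etc/systemd/system/mysql.service.d/override.conf
    [Service]
    PrivateTmp=false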
jwhberdario: chuck it in datadir with mysql:mysql
jwh:P
berdarioit doesn't work if I put it in my ~ either... I'll try to put it in datadir :)
berdariodatadir should be /var/lib/mysql by default, right?
jwhprobably
berdario(it is, according to /etc/mysql/mysql.conf.d/mysqld.cnf )
berdariothe process is still running
berdarioseems promising
berdariobut the password hasn't been reset yet
berdariohttps://gist.github.com/anonymous/acdb272e91b5bfc7c9ea8a4db2baa819
berdarioI wonder if, rather than systemd, the fault might not lie with apparmor
jwhisn't the rule with aa/selinux to remove that nonsense immediately after install?
berdarioyeah, it's a bit unfortunate that they end up always getting removed...
jwhheh
berdariowell, at least I got an almost-solution
berdariothanks for the help in debugging this
berdarioI'd add a comment about mysqld being sandboxed and the consequences of launching it with --init-file here: http://dev.mysql.com/doc/refman/5.7/en/resetting-permissions.html
berdario(I mean, I'd add it in the main page... but I'll add a comment at the bottom, since that's all that I can do)
jwhmight be worth finding out which distributions have private tmp dirs enabled