nafis it possible to store the entirety of a blog post in a single mysql entry?
dragonheart!m naf data types
ubiquity_botnaf: See
nafand how can i define that when making a new table entry
nafokay thanks
nafis it possible to make a column with a larger maximum size than 65,535 bytes?
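[editor's note: a minimal sketch of the sizes involved; the table and column names are hypothetical. TEXT holds up to 65,535 bytes, MEDIUMTEXT up to ~16 MB, LONGTEXT up to ~4 GB.]

```sql
-- Blog post body stored in a single column; MEDIUMTEXT comfortably
-- exceeds the 65,535-byte limit of TEXT.
CREATE TABLE blogposts (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    title   VARCHAR(255) NOT NULL,
    content MEDIUMTEXT   NOT NULL
);
```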
antiPoPindex is added, took 16 hours
antiPoPthe query gave 1 warning, how can I see it?
dragonheartshow warnings;
phil28I have a weird question, not sure this exists
phil28is it possible to setup mysql so that when a change is made to a specific table (like an insert), mysql automatically populates another table with data from this insert?
phil28but manipulates the data
dragonheart!m phil28 triggers
ubiquity_botphil28: See
dragonheartif you can avoid that by normalizing data it's generally better than triggers!!!!!
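[editor's note: a sketch of the trigger idea being discussed, under assumed table names (`orders`, `order_report`) and columns that are purely illustrative.]

```sql
-- Copy each newly inserted row into a second table, transforming
-- the data on the way in (here: converting an amount to cents).
DELIMITER //
CREATE TRIGGER orders_after_insert
AFTER INSERT ON orders
FOR EACH ROW
BEGIN
    INSERT INTO order_report (order_id, total_cents)
    VALUES (NEW.id, ROUND( * 100));
END//
DELIMITER ;
```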
phil28yeah, but here's the kicker
phil28I'm using Opencart for an ecommerce
phil28with Journal theme ontop
phil28I'd rather not mess with the files to be able to handle updates more easily
phil28or maybe vqmod...hmm
phil28or maybe I can pull the data out in the way that I want with a simple sql query
dragonheart!t phil28 joins
ubiquity_botphil28: Very basic join information: - For more in-depth info: - joins in the same table
phil28problem is this
phil28Let's say I'm selling tshirts
phil28selling tshirts in packs of 4
phil28you can choose 4 colors, 1 size and 1 style
phil28opencart stores all these options in the same table
phil28but I want to be able to pull out a table of the sort: Color | Size | Style
phil28so that I can have a view of the t-shirts I need to prepare for the orders
phil28getting the color in the table is not complicated, but adding the style and size to that same row seems to be
ne1is the channel to ask about database theory?
uid1In theory it might produce an answer, in practice it might not.
uid1(joke) Ask, see if we can help, ne1
nafdoes anyone knowledgeable see anything immediately wrong with this?
nafif($mysqli->query("INSERT INTO `blogposts` (`title`, `content`) VALUES ('$title', '$content');"))
nafam i allowed to put white space into a query?
nafin between quotes
ne1im trying to figure out how to check if decomposition preserves functional dependency
ne1one example, if FD is (AB->C, C->A, C->D) and R1(ACD) R2(BC) it does not preserve functional dependency because there is no way to get AB->C
phil28dragonheart..I think you are correct...can I join the same table? If so I'll be able to get my result :)
uid1phil28: Yes, you can join a table to itself.
jkavalik!t naf about sql injection
ubiquity_botnaf: Please read and and
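[editor's note: the query above interpolates `$title` and `$content` directly into the SQL string, which is the SQL-injection risk the bot is pointing at. A sketch of the bound-parameter alternative, shown in server-side SQL; in the PHP code above the equivalent is `$mysqli->prepare()` with `bind_param()`.]

```sql
-- Never splice user input into the SQL text; bind it instead.
PREPARE stmt FROM 'INSERT INTO blogposts (title, content) VALUES (?, ?)';
SET @title = 'My first post', @content = 'Hello, world';
EXECUTE stmt USING @title, @content;
DEALLOCATE PREPARE stmt;
```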
phil28yup..looking it up..just gotta figure out how to properly format the result since there will be 2 joins in the query
phil28holy's working!! lol
phil28god this shit is powerful :)
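[editor's note: a sketch of the self-join phil28 arrived at, assuming a hypothetical approximation of OpenCart's option storage where each option (Color, Size, Style) of an order line is a separate row in one table. All names here are guesses, not OpenCart's real schema.]

```sql
-- Join the options table to itself twice to pivot three option rows
-- into one row per order line: Color | Size | Style.
SELECT co.value AS color,
       sz.value AS size,
       st.value AS style
FROM order_option co
JOIN order_option sz
  ON sz.order_product_id = co.order_product_id AND = 'Size'
JOIN order_option st
  ON st.order_product_id = co.order_product_id AND = 'Style'
WHERE = 'Color';
```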
TeCHLoGy[How] can I send an email to a mysql database? i want to send all attachments too (to a database that collects registered users' data)
TeCHLoGy[-->] i have some solutions for win... but i want it for linux..
orlockTeCHLoGy: Not terribly difficult to write, but maybe the wrong approach
jkavalikthat has nothing to do with #mysql actually (that means - there is no direct way to do it using SQL), you need some app (buy or write) for that
orlockTeCHLoGy: Store the emails locally and refer to the filenames
lamnethTeCHLogy: since you can "email to ODBC", can't you use that on Linux?
lamnethTeCHLogy: and why would you programmatically "email to a database" when you can connect to it and insert the data directly?
lamnethTeCHLogy: I guess the proper question is what are you trying to do and why this way?
jkavalik"email to ODBC" is THE software feature of theirs, the application code to do it is what they sell
lamnethjkavalik: yep, understood that.
jkavaliklamneth, was trying to stress that a bit for TeCHLoGy, sorry :)
TeCHLoGycan i use an email to ODBC and then an ODBC to mysql?
bhuddahodbc is just an interface definition (basically)
lamnethTeCHLogy: What are you trying to do exactly? This kind of "patching" doesn't seem like the simplest way!
TeCHLoGyACTION says nothing. this is only a different question about using mysql.
orlock.. NSA is really scraping the bottom of the barrel these days
orlockThere's gotta be a better way to store everybody's email than that
TeCHLoGyNSA cant spy on the minds :p
biaxwhat do you guys use to manage your mysql databases -remotely- ?
TeCHLoGyon windows?
TeCHLoGyNavicat premium (i have some cracked)
biaxalso, is tmpfs for tmpdir necessary? even when my vps is on ssd
biaxwell, recommended, not necessary
biaxhaha its fine, i am excellent at finding cracked software too
biaxany reason why you wont recommend opensource solutions?
biaxi searched open source alternatives and found heidisql and a mysql workbench
TeCHLoGyit is the best collection. use this
jkavalikbiax, I would say tmpfs (or generally RAM based storage) is NOT recommended for MySQL tmp dir - the temporary tables can get big because of some bad query and when you run out of temp space the server may stop until you free some space
stefuNzhey ... hope you can help me?
p4trixUsecase: 2 applications on different servers. I need to access database 1 on server 1 from server 2. I could ssh-tunnel. But it's meant to be like that continuously. So I don't think it's the best approach. Any thoughts on that?
p4trixI know there is a way of setting up a public host for the mysql db. Would that be a better way? How do you call that?
jkavalikp4trix, remote connection?
jkavalik!t p4trix about remote
ubiquity_botp4trix: remove bind-address= and skip-networking from my.cnf and grant permission to the external 'user'@'host' and remove any firewall rules blocking port 3306 and make sure no overrides on the mysqld commandline. See
jkavalikare the servers on the same (sub)network?
jkavalikthen best would be to configure a VPN between them imho, that is a bit more permanent than ssh tunnels but is equivalent in security and availability
p4trixjkavalik, Thank you for the feedback, but do you thing that just granting the external user access isn't enough? Why an additional VPN? To me it seems a bit over-engineered at first sight. I just need to grab some values on a regular basis and those aren't big projects.
p4trixdo you think*
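[editor's note: a sketch of the "just grant the external user access" option from the bot's factoid above; the user name, database name, and IP are placeholders.]

```sql
-- On server 1: allow a dedicated, least-privilege user to connect
-- from server 2's address only (also remove bind-address/skip-networking
-- from my.cnf and open port 3306 in the firewall for that host).
CREATE USER 'app_ro'@'' IDENTIFIED BY 'a-strong-password';
GRANT SELECT ON appdb.* TO 'app_ro'@'';
```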
Bent0I'm trying to select rows where the time difference between the previous row is more than 2 mins. The field is a datetime field. Any idea how to proceed?
p4trixprevious row is which? id -1?
p4trixBent0, select * from table where not exists(select from table where <your time limits>)
p4trixDoes that make sense?
Bent0yeah ill give it a shot
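[editor's note: one way to express "rows more than 2 minutes after the previous row" on a server without window functions, assuming a hypothetical table `events(id, created_at DATETIME)`. On MySQL 8.0+ the LAG() window function makes this much simpler.]

```sql
-- For each row, find the latest earlier timestamp via a correlated
-- subquery, and keep rows that come more than 2 minutes after it.
-- The very first row is excluded because MAX() returns NULL for it.
SELECT e.*
FROM events e
WHERE e.created_at > (SELECT MAX(p.created_at)
                      FROM events p
                      WHERE p.created_at < e.created_at)
                     + INTERVAL 2 MINUTE;
```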
emrHello, how can I create a memory table which is a mirror of an existing (hard disk) table, is it possible?
lamnethemr: CREATE TABLE t (col1 INT(11), col2 INT(11)) ENGINE=MEMORY AS SELECT * FROM another_t
lamnethemr: make sure you have enough memory by setting max_heap_table_size appropriately
emrlamneth, but what about updates and deletes, how do I actually sync them?
lamnethemr: programmatically or with events and/or triggers
emrlamneth, alright thanks a lot
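[editor's note: a sketch of the trigger-based sync lamneth suggests, assuming `t(col1, col2)` with `col1` as the key and `t_mem` as the MEMORY copy; names and columns are illustrative only.]

```sql
-- Keep t_mem in step with every change to t.
CREATE TRIGGER t_ai AFTER INSERT ON t FOR EACH ROW
    INSERT INTO t_mem (col1, col2) VALUES (NEW.col1, NEW.col2);

CREATE TRIGGER t_au AFTER UPDATE ON t FOR EACH ROW
    UPDATE t_mem SET col2 = NEW.col2 WHERE col1 = OLD.col1;

CREATE TRIGGER t_ad AFTER DELETE ON t FOR EACH ROW
    DELETE FROM t_mem WHERE col1 = OLD.col1;
```

Note that MEMORY tables are emptied on server restart, so the mirror would need rebuilding then.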
anasuleIs there a better way to do the following: i have a database of parts and some parts use other parts as sub parts. When a new part is created i take each subpart and do a for loop inserting the sub parts as parts into the database. Sometimes they already exist so i get a duplicate key entry, but it seems it's that or do a select to see if it's there and then an insert to add it. What's more efficient, a select to check its existence and then c
p4trixINSERT IGNORE INTO <table> ?
p4trixAlso you should check your db normalisation. Doesn't sound too good at first sight :P
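[editor's note: the two common idioms for this, with hypothetical table and column names.]

```sql
-- Silently skip the insert when the key already exists:
INSERT IGNORE INTO parts (part_no, name) VALUES ('P-100', 'Bolt');

-- Or, if an existing row should be refreshed instead of skipped:
INSERT INTO parts (part_no, name) VALUES ('P-100', 'Bolt')
ON DUPLICATE KEY UPDATE name = VALUES(name);
```

Both avoid the SELECT-then-INSERT round trip and its race condition.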
magentarhi! I have a query that I run for a table that contains a longblob column. running the first query on it after a while takes a very long time (over 20 seconds) but each subsequent query runs quite fast. explain also shows that the query only reads 2000 rows, so it's hard to find the problem. I think mysql "opening" the table takes such a long time, do you think it's caused by the longblob column? the table contains 91k rows and weigh
magentarthe query can be something as simple as select count(*) from email;
anasulep4trix: INSERT IGNORE INTO is working perfect thankyou
magentarto answer my own question, it seems like innodb is not optimized for large binary columns:
jkavalikp4trix, sorry for a very late response, was a bit busy.. with an ssh tunnel or vpn you can limit the access from outside to only allowed computers, without it you have to make the server port entirely public, allowing anyone on the internet to try to connect, using dictionary attacks, brute force etc
p4trixjkavalik, Thank you. Makes sense.
shodan`Is it possible to enable some sort of verbose logging on MySQLd? We've been having some connection issues for a while now.
shodan`>setupconnection: failure; Lost connection to MySQL server at reading authorization packet, system error: 2
shodan`(this is not a MySQL error but still)
XL!n shodan log_warnings
ubiquity_botXL: Sorry, I have no idea about that function.
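[editor's note: what XL was likely pointing at; on 5.6/5.7, raising `log_warnings` makes aborted connections (like the "reading authorization packet" error above) show up in the error log. In 5.7.2+ `log_error_verbosity` is the newer equivalent.]

```sql
-- Dynamic at runtime, or set log_warnings=2 under [mysqld] in my.cnf:
SET GLOBAL log_warnings = 2;
SHOW VARIABLES LIKE 'log_warnings';
```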
jikzhi guys.
jikzi am using Server version: 5.6.30 MySQL Community Server (GPL)
jikzfor some reason i am changing the sql-mode variable using my.cnf but it's not reflecting..
jikzis there anything i need to do apart from adding it and restarting mysql?
jikzusing centos 7
jkavalikjikz, are you adding it to the right section? [mysqld] is the one
jikzjkavalik, yes i am
jkavalik!t jikz which my.cnf
ubiquity_botjikz: mysqld --help --verbose | grep -C2 my.cnf :: Long explanation:
jkavalik To clear the SQL mode explicitly, set it to an empty string using --sql-mode="" on the command line, or sql-mode="" in an option file. -
jikzjkavalik, hmm.. i did check the files.. and they are all empty..
jikzexcept for ~/.my.cnf
jikzthat has only my password under [client] block
jikzthere is no [mysqld] block defined
jkavalik~/.my.cnf should be only for client apps as your user is not the one running the service
jkavalikjikz, so what file did you modify?
jikzjkavalik, yes.. i just did it.. added sql-mode="" in /etc/my.cnf
jikzfor some reason it is not taking it..
jikzwhen i login as root (localhost) i still get | sql_mode | STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION
jikzany idea jkavalik
jkavalikjikz, are you connecting to the right instance? are other values set in that my.cnf reflected?
jkavalikdid you check the error log to see any warnings about mode?
jikzyes.. i am on the server itself.. where mysqld is running.. i am checking other values to see if they are reflected..
jikzjkavalik, yes i did.. i just changed the default engine from myisam to innodb and restarted..
jikzit worked..
jikzreverted it back as well.
jikzsql-mode="" is just under it..
jikzin the same block.
jikzjkavalik, just figured it out..
jikzthere is /usr/my.cnf
jikzits a cpanel server and i don't know why /usr/my.cnf is used..
jikzjkavalik, do you have any idea how this is decided..?? is it a daemon setting hardcoded?
jikzjkavalik, okay so its oracle who did it.. blamed cpanel :D
Dead_Officeis it possible to sort the data in the database based on a unique id number
thumbsDead_Office: see the ORDER BY clause.
Dead_Officethumbs, Lovely . lol I completely forgot that one can always sort the output
Dead_Officethumbs, Thanks
Bent0I have this table where a timestamp column has ON UPDATE CURRENT_TIMESTAMP. I want to insert a bunch of values which already have a timestamp but when I do all the timestamp values are replaced with the timestamp of the moment of insertion. Any way to circumvent this?
f3ewBent0: alter table to remove the on update, add your rows, restore the on update
Bent0Not a better way to do this? There will probably be insertions while im doing this
f3ewMySQL doesn't support transactional DDL like PostgreSQL
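[editor's note: a sketch of Bent0's situation. An explicitly supplied value is honored on INSERT despite the column's ON UPDATE clause, so the bulk load itself should not need the ALTER dance (inserting NULL into a TIMESTAMP column, however, stores the current time unless `explicit_defaults_for_timestamp` is on). Table and column names are hypothetical.]

```sql
CREATE TABLE t (
    id   INT PRIMARY KEY,
    data VARCHAR(50),
    ts   TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

-- Explicit value wins over the DEFAULT on insert:
INSERT INTO t (id, data, ts) VALUES (1, 'x', '2016-07-01 12:00:00');

-- On update, assigning the column to itself suppresses the auto-update:
UPDATE t SET data = 'y', ts = ts WHERE id = 1;
```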
BlackPanxLock wait timeout exceeded; try restarting transaction
BlackPanxanyone can point me to parameter that handles this lock wait timeout ?
BlackPanxcan "Lock wait timeout exceeded; try restarting transaction" happen because of slow disk? or somehting like that ?
jkavalikBlackPanx it happens because "your" transaction is trying to update something locked by some other transaction and the other transaction is still running (it is very different from a deadlock because the other transaction is running OK, just not finishing for some reason)
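[editor's note: the parameter BlackPanx asked about. `innodb_lock_wait_timeout` governs InnoDB row-lock waits (error 1205 above); a separate variable, `lock_wait_timeout`, governs metadata-lock waits, which is often what blocks an ALTER.]

```sql
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';  -- default is 50 seconds
SET GLOBAL  innodb_lock_wait_timeout = 120;      -- new connections
SET SESSION innodb_lock_wait_timeout = 120;      -- this connection only
```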
BlackPanxthere was no other transaction.
BlackPanxit was the only query
BlackPanxit's clean database
BlackPanxno connection besides one alter and my connection with show full processlist;
BlackPanxnothing else
jkavalikwhat was the query getting this error? the alter?
BlackPanxhere's the output
BlackPanxof my script
BlackPanxthat does alter
BlackPanxand was running in screen.
SkiOnehi, we are migrating data centers. I setup the new master to replicate from the old master to keep it in sync. Is it kosher to have the new slave, replicate from the new master that is replicating from the old master?
jkavalikBlackPanx, what mysql version? something had to be holding a lock (possibly a metadata lock) on the table so the alter did not even start
BlackPanxMySQL 5.6.23
jkavalikdo you have a screen of the "show processlist" output?
BlackPanxthere was only this alter running
BlackPanxi don't have screenshot
BlackPanxbut... i checked
BlackPanxonly this alter was running
jkavalikand was the alter really running? or just waiting?
jkavaliksounds really weird
BlackPanxthese were queries
BlackPanxand this statusCube failed
BlackPanxwith this wait lock
BlackPanxit is weird, never happened before
BlackPanxand when reproducing problem in test environment, same my.cnf settings, it works.
BlackPanxsame version of mysql, same parameters, everything the same, just instance is other. and even that is AWS, so it's basically the same.
salleBlackPanx: There could be another idle transaction holding locks, but doing nothing at the moment. It will appear with state=sleep in processlist
BlackPanxthere were no sleeping transactions, it's clean database, nothing connects to it, it's nowhere configured
BlackPanxthis script is the only one connecting
BlackPanxi'll retry the whole thing
BlackPanxby recreating whole database
BlackPanxand i'll log the processlist, but there were no other connections or queries going on
BlackPanxit's nowhere configured except this script that was suppose to upgrade the database
BlackPanxbut hardware issues cannot interfere with this waiting for an innodb lock ?
BlackPanxlike, let's say AWS had disk issues and mysql couldn't acquire lock
BlackPanxin time
jkavalikif there is no one else to lock it, then slow or otherwise problematic hardware has no way to interfere imho
jkavalikfiles use different kinds of locks
BlackPanxso there could be no hardware interference here
BlackPanxagain same thing
BlackPanxhere's processlist
BlackPanxFatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction' in /root/mab-upsert20160718/shared/php/library/C/AbstractDatabaseMaintainer.php:224
BlackPanxthat NULL process on id 4 was another sysadmin just doing show processlist;
salleBlackPanx: SHOW ENGINE INNODB STATUS will tell you what is locked and what holds the lock
salleBlackPanx: Also check for disk full errors in .err log
BlackPanxhere's my show innodb status
BlackPanxwhen i try to run this alter
DarkUraniumis there a way to set MySQL's SQL-related flags on a per-connection basis?
DarkUranium(instead of changing the behavior of the entire daemon)
DarkUraniumnamely, I'd prefer if SQL was std-compliant
BlackPanxsalle: jkavalik: it seems this thing has to do with index or something...
salleDarkUranium: Don't you trust the manual? It clearly says you can do it
DarkUraniumsalle, all I found in the manual was entire-daemon stuff
salleDarkUranium: Really?
salle!m DarkUranium sql mode
ubiquity_botDarkUranium: See
salleDarkUranium: Are you saying you did read this page and didn't see SET SESSION ...; there?
DarkUraniumsalle, no, it was another page, I think
DarkUraniumand no need to be a patronizing asshole.
DarkUraniumthanks for reminding me about part of why I disliked MySQL
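[editor's note: the per-connection mechanism salle is referring to; `ANSI` is MySQL's closest-to-standard composite mode.]

```sql
SET SESSION sql_mode = 'ANSI';   -- affects this connection only
SELECT @@SESSION.sql_mode;

-- Server-wide instead (requires SUPER, affects future connections):
SET GLOBAL sql_mode = 'STRICT_TRANS_TABLES';
```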
throsturis there a smarter way to check what timestamp the current query is from? I'm thinking SELECT *, NOW() ...
throsturthis adds the timestamp to every row, but can't I just make the first or last row be timestamp only somehow?
sallethrostur: Why do you need that?
sallethrostur: Why do you need this timestamp?
throstursalle: to do queries like "select * from foo where ts > bar" where bar is always the last timestamp
sallethrostur: MAX(ts) is the last timestamp in the table
throstursalle: I want the current timestamp of THIS EXECUTED QUERY, not the last timestamp in the database, this query executes on the read slave and happens concurrently with thousands of other queries
sallethrostur: What for?
Xgcsalle: To handle time warp and time travel. We want to ignore timestamps that will be created in the (our) future.
sallethrostur: If query happens concurrently with thousands of other queries you can get hundreds of them executed at the same time.
sallethrostur: Let me ask another way: Suppose you have some SELECT which starts at 18:15:05 and finishes two seconds later at 18:15:07. What timestamp do you want to get for such query?
throstursalle: the earliest
sallethrostur: In such case SELECT TIMESTAMP() ..; can give you any value in between
throsturis there some way to force a query to take longer than usual?
sallethrostur: Earliest what? The point in time when query was received by the server? The time it was parsed? The time the table(s) were opened and the query started "working" or something else?
throsturI don't *really* care, basically I just want to ensure that I always get fresh data from the query
throsturi.e. I don't want the same row twice
throstur(in consecutive queries)
salle!t throstur xy
ubiquity_botthrostur: The X-Y problem is when you want to do X, but you don't know how. You think you can hack X if you can just do Y, but you don't know how to do Y either. You ask (us) for help with Y. We think Y is a strange problem to want to solve... So just ask us about X.
throsturwell now you know
throsturI also answered this at [14:59] <throstur> salle: to do queries like "select * from foo where ts > bar" where bar is always the last timestamp
throsturmaybe the jargon made it seem ambiguous
throstursalle: so what is wrong with my current "hack", and what would you suggest I do instead?
sallethrostur: I suggest you to explain what problem you are trying to solve
throstursalle: I want to execute a read query every X minutes, but it could be X +- Y minutes, anyway the next time the query gets executed it looks for stuff from then till now, rather than since history began
throsturI don't understand how I can explain it otherwise, maybe you can give me a hint?
sallethrostur: You are dealing with transactions and multi-versioning. If you assume there is current state of the data in particular point in time you are wrong about it.
throsturI didn't quite understand that
throsturI'm (basically) just trying to count new rows, can I not assume anything about that?
sallethrostur: In general you can't use timestamp for that
throsturOK, but I'm only hearing the problem, not the reason why it's a problem
sallethrostur: Suppose you SELECT NOW() ...; but at the time the server timestamp is fetched there are 5 other transactions which started *earlier* and are INSERTing rows in your table with *earlier* timestamps. They get COMMITTED *later* after you have fetched your timestamp and you miss these rows
throsturthis makes sense
throsturnow I can see why max(ts) makes sense
throsturbut I am unsure if max(ts) doesn't have this same problem
sallethrostur: It does have same problem due to transaction isolation
throstursalle: so where do I go from here, I'm perfectly happy with my 'hack', but you're claiming there's a better, more correct way, right?
sallethrostur: Your current transaction does not see uncommitted changes. Unless indeed you force SERIALIZABLE for all client apps or lock the tables which will kill the performance
sallethrostur: Better way for what? I don't know what kind of problem you are trying to solve
throsturI'm just fetching count(1) on some rows for metric collection
throsturI don't really see a problem other than the one you have explained
throstursalle: to paraphrase, my 'problem' is exactly what you have described. How would you handle such an issue?
XgcI read the story a few times. I still don't know what he's asking.
salleXgc: Me too
XgcThe problem with transaction isolation doesn't answer the question about what the real purpose of the query is.
XgcMaybe a simple testcase / example would help.
XgcBut don't use ts if you mean CURRENT_TIMESTAMP.
XgcJust add CURRENT_TIMESTAMP to the INSERTs in the testcase, if you mean ts at time of insert.... or TIMESTAMP in the SELECT list or criteria, if you mean timestamp at the time of the select, etc. The description was too ambiguous.
salleXgc: I use ts quite often as a name of TIMESTAMP column :)
XgcACTION nods
salleXgc: .. and I did my best to explain why "CURRENT_TIMESTAMP of SELECT" does not make sense
salleXgc: In MySQL world NOW() is more common simply because it is shorter
XgcRight. There's no easy way to avoid issues with the in-transit (different session) operations if he's assuming the time of the select will see all the data in some ts range. Some of them may not be committed yet.
XgcAt least with his apparent approach.
throsturOK so in a little more detail then... I'm running a query that looks like SELECT COUNT(1), id, CURRENT_TIMESTAMP() FROM my_table WHERE insert_ts > {0} GROUP BY id; -- I do stuff with the rows and next time I execute it I substitute {0} with rows[0][2] (i.e. current timestamp of topmost row)
throsturIm doing it to collect status line counts as metrics
throsturdoes that make any sense?
Xgcthrostur: I think I get your intent. It won't work, since all inserts may not have been committed and have ts values less than the next range. So you'll miss them.
throsturso TL;DR there is no way to guarantee that no rows get missed without a huge performance hit
Xgcthrostur: Correct. Not with your approach.
throsturXgc: could you suggest an alternative approach?
throstur(if not, could you explain what information you would require as a prerequisite to suggest an alternative?)
Xgcthrostur: I can't think of one that doesn't either change the isolation level (as noted above) or include some type of queue.
throsturso basically if I don't *really* care if I count it 100% correctly, I should just do it this way and accept the consequences?
throsturalright, that pretty much solves it for me
throsturis there a huge performance hit in adding the CURRENT_TIMESTAMP() column to every row?
Xgcthrostur: Not sure how that helps.
throstursorry, what I really mean is: is it cheaper to run 2 queries (select NOW() ; -- and select * from what I want) or should I run select *,now() from what I want
XgcYou still have the problem of uncommitted transactions with timestamps you may ignore in the new range.
throsturright but I'm just thinking about performance now
throsturI don't care about uncommitted transactions
throstur(my wording may seem careless; I'm just trying to stave away from ambiguity)
XgcOk. Is your question about "every row in the select" ?
Xgcthrostur: You're not asking about inserting into the target table, correct?
throsturno, this is read-only
XgcOverhead is low, relatively. It's all relative.
throsturI'm just wondering about the performance impact of select a,b, NOW() as opposed to select a,b ; select NOW()
Xgcthrostur: If I'm not mistaken, NOW() will be calculated once for all rows.
throsturgreat, exactly what I was hoping
throsturthank you
sallethrostur: Use time in the past not NOW() :)
throstursalle: you mean to prevent lost rows or?
throsturnote that I'm still on the use case where I'm selecting a timestamp to use in a later query
sallethrostur: If for example you know all transactions finish within less than 10 seconds you can process all rows with ts < NOW() - INTERVAL 20 SECOND and store that as "next timestamp" for the next query
sallethrostur: Or play safe and use NOW() - INTERVAL 1 HOUR or more
throsturroger that, excellent advice!
throsturthat's the opposite of safe haha
throsturway too many duplicate counts
throsturthe query is meant to run every X minutes, where X could be as low as 1
sallethrostur: My point is that you can easily select all the rows which are inserted and committed until some time ago, but it is not possible to tell which rows are inserted and committed until the current timestamp
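[editor's note: salle's "lag the window" approach applied to throstur's query above; `@last_cutoff` stands in for the timestamp the application persisted from the previous run, and the 20-second lag is the assumed maximum transaction duration.]

```sql
-- Only count rows old enough that any transaction which could still
-- insert into the range has already committed.
SET @cutoff = NOW() - INTERVAL 20 SECOND;

SELECT id, COUNT(*) AS cnt
FROM my_table
WHERE insert_ts > @last_cutoff
  AND insert_ts <= @cutoff
GROUP BY id;

-- then store @cutoff as @last_cutoff for the next run
```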
USvERHey, how can i get my json array as table? something like this SELECT * FROM JSON_ARRAY(1,2,3)
martixyHello. I have a table with many rows that I'm trying to speed up, specifically, there is one query based on an FK + datetime that is being horribly slow. I can see there are indexes for both (one I created explicitly, the one for the FK seems auto-generated). Problem is when I do ...WHERE FK = x AND datetime > y, it runs like shit and explain says it doesn't use the index(es). MariaDB10.
passagemartixy: mysql can essentially use one index per table per select
passagemartixy: you probably need a composite index
martixySo I have to create a composite
martixyHeh, right.
passage!m martixy multiple column
ubiquity_botmartixy: See
passagemartixy: leftmost is important
martixyWhat? I don't get that last statement.
passagemartixy: the order of columns in the composite index. see the url above ^^
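[editor's note: a sketch of the composite index passage is describing, with placeholder names. The equality column goes leftmost so the range condition on the datetime can use the remainder of the index.]

```sql
-- One index covering both parts of: WHERE fk_col = x AND created_at > y
ALTER TABLE t ADD INDEX idx_fk_dt (fk_col, created_at);
```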
archivistI prefer to say one index per row of the explain
USvERIs there any possibility to operate on JSON like on table?
Twistpassage: the interesting question is "why isn't mysql using either index in the first example?"
passageTwist: query optimizer can decide to not use existing index if a table scan would be faster in its estimation
Twistseems like this wasn't the case though.
passageTwist: who knows what the cardinality of the fk is? not me
archivist and this part may refer to 90% of the table datetime > y
passagemartixy: google is your friend :-)
martixyOh, it's a many-to-one.
passagemartixy: you'll find over time that indexing boolean columns is often of little use, depending on the distribution of a/b values
passagemartixy: indexes work best when the value you're searching for happens < 30% of the time (loosely true)
archivistbest is unique
passagemartixy: at some point the optimizer decides that just scanning the entire table will be more efficient
USvERIs there someone who knows about JSON and how to use it?
martixyI see. Also, in this particular use case, the chance that condition will not be met is meager and transitive. :)
archivistI know not to use JSON
martixyUSvER: JSON has nothing do to with mySQL. In fact, one might argue if you put json in an RDBMS, you're doing something wrong.
martixyUSvER: You might try $javascript
USvERmartixy, MySQL developers are doing something wrong, I get it
USvERmartixy, postgresql devs are doing something wrong too
martixyI think postgres has document store functionality.
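[editor's note: for the record, USvER's exact wish became possible later; MySQL 8.0 added JSON_TABLE, which turns a JSON array into a relational rowset. Not available in the 5.6/5.7 servers discussed in this log.]

```sql
-- MySQL 8.0+ only:
SELECT jt.val
FROM JSON_TABLE('[1,2,3]', '$[*]'
                COLUMNS (val INT PATH '$')) AS jt;
```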
throsturLet's say I have tables A and B, I want to count all rows in A, grouped by a key in B, but if there are no rows to count I want to display 0 -- how do I do this?
throsturexample: SELECT count(1), FROM A as a (JOIN?) B as b ON = WHERE insert_ts > NOW() - interval 1 hour GROUP BY; --
throsturReturns fooname | 234 ; barname | 421 ; bizname | 29; should return: fooname | 234 ; barname | 421 ; bizname | 29 ; buzzname | 0
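[editor's note: the usual answer, sketched with throstur's table names; the join and name columns are guesses since the original message elided them. Driving the query from B with a LEFT JOIN keeps unmatched names, and COUNT of a column from the outer-joined side yields 0 for them. The time filter must live in the ON clause; putting it in WHERE would turn the outer join back into an inner one.]

```sql
SELECT, COUNT( AS cnt
FROM B b
LEFT JOIN A a
  ON   =
 AND a.insert_ts > NOW() - INTERVAL 1 HOUR
GROUP BY,;
```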
SilentNinja1In MySQL, how do I know what the network buffer usage is?
SilentNinja1I have a large server with 244GB memory, and I set max_allowed_packet=32 , but I'm concerned about performance issues when I have over 1500 connections to the database. Any help would be appreciated!
SilentNinja1that's max_allowed_packet=32Mb
thumbsSilentNinja1: you can use 1G - the buffer is only allocated as needed.
SilentNinja1OK, but how do I monitor network buffer usage?
SilentNinja1I just want to make sure my App isn't killing the database with network data. thanks
SilentNinja1& with over 1500 connections, I'm concerned if our Apps are sending too much data over the network that may impact the Database's performance.
ezek2k4is this normal? /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/ --open-files-limit=18526 --pid-file=/var/lib/mysql/
ezek2k4i dont even have a lot of traffic maybe 2k/month
nbettenhCan I have a mysql trigger change the insert query from: insert (column1,column2,column3) values (val1,val2,val3); to insert (column1,column2,column3) values (val1,val2,val3),(val4,val5,val6); ?
LJHSLDJHSDLJHAre there other ways/tools to grab data tables from ms access into mysql, other than mdb-exports?
LJHSLDJHSDLJHit keeps throwing an error mdb-export -H -I mysql databaseBKP.mdb students > students.sql
LJHSLDJHSDLJHSegmentation fault (core dumped)
thumbsLJHSLDJHSDLJH: just export the data to csv files, and import those into MySQL
thumbsLJHSLDJHSDLJH: you can create the table structures manually.
TwistLJHSLDJHSDLJH: can you just connect to mysql with the windows ODBC driver and insert from access into a linked table?
LJHSLDJHSDLJHTwist: I'm on ubuntu
TwistAh. I was assuming you had a way to run Access.
thumbsLJHSLDJHSDLJH: perhaps workbench can help you with that
LJHSLDJHSDLJHthumbs: I'm trying with workbench but not sure how to create an ODBC connection to ms access
LJHSLDJHSDLJHor even if there is a driver for linux
ezek2k4how can i check why mysql is using 15% of my memory
nvidhiveLJHSLDJHSDLJH: unixODBC can do that
sagestoneI doubt this highly, but thought I'd ask. I have a csv with 80 columns and 110,000 rows. I want to import it as a table to my database. Is there a way to do this without creating the table and every field manually? IDC how it would be done, just as long as I don't have to create the table manually.
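[editor's note: this went unanswered in the log. MySQL itself has no built-in CREATE-from-CSV, so the table must exist first, but it can be declared loosely and tightened later; a sketch with placeholder names and path.]

```sql
-- One TEXT column per CSV column is enough to get the data in:
CREATE TABLE import_raw (c1 TEXT, c2 TEXT /* ... one per CSV column ... */);

LOAD DATA LOCAL INFILE '/path/to/file.csv'
INTO TABLE import_raw
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;  -- skip the header row
```

Tools like MySQL Workbench's import wizard can also generate the column list from the CSV header.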
TopHatGuywhat does "in production" or "production use" mean?
TopHatGuyI keep seeing the words everywhere
passageTopHatGuy: customer facing, as opposed to development
mgriffinTopHatGuy: so when you build an app, you test it in "staging", "uat", "dev", "pre-production" etc
mgriffin"production" being customer facing, as passage said
passageTopHatGuy: you don't test things on a production server, because they break :-)
passageTopHatGuy: be badass, wear this T:
LJHSLDJHSDLJHnvidhive: unixODBC is crap, it keeps crashing ODBC Administrator
nvidhivewth is easysoft?
nvidhivelibmdbodbc is what I have used
nvidhivebut it has been awhile
nvidhiveunixODBC with the MS MSSQL driver works like a charm if you configure kerberos and get a ticket
nvidhiveI don't use GUIs, so I am sorry.. I have no idea what you're doing
nvidhivebesides, the unixODBC stuff is kind of off-topic
nvidhivebut good luck! :)
domasHi! is there bittorrent-like distribution layer for streaming query results for MySQL?
domaswell, for any streaming result, what is the best way to multicast the answer
passagefax machine
martixySo here's a question: Is there a good point to putting an index on a boolean value, where the ratio of T/F is very large and the queries will be on the small end?
dragonheartrarely but it can be of use when combined with other fields to make compound index
martixyRarely in general or in the specific case I described?
dragonheartactually the disparate ratio may be of use for you.
martixyI'm just not entirely sure if mysql will even decide to use it.
martixy...cuz who knows how the optimizer works.
dragonheart!t martixy use explain
ubiquity_botmartixy: and
dragonheart!t martixy query optimization
martixyOh, that last one will be useful.
dragonheartif you are stuck. provide details as below
dragonheart!t martixy idfk
ubiquity_botmartixy: Please paste your query, the EXPLAIN select.., and the relevant SHOW CREATE TABLE/s in one pastebin with the sql formatted so it is readable
martixyHeh. Nah, I don't have the data to test right now.
dragonheartmartixy: if its like an archive flag where you rarely search archived material then perhaps put that in a separate table.
dragonheartI assuming you aren't searching just on this flag, so I assume its going to be a compound index with something else.
martixyWell, it's gonna be a big bunch of pooled data. And they'll have categories. And the flag is going to be whether we're seeing a category for the first time.
martixyAnd the point is going to be to get all the new thingies and process them, so they work with the other bits of the app.
dragonheartsounds like its worth using a message queue for such things. sql databases implement message queues horribly so picking another product for this is probably best.
martixyWell, they're gonna be processed by hand.
sarkishey all - on ubuntu... trying to install a package not related to mysql is trying to restart mysql - anyone see behavior like this?
sarkisi.e. in our case.. apt-get install git
sarkisfor some reason restarted mysql - no clue why? :(
dragonheartperhaps it didn't complete its install the first time. its probably unrelated to git, it just happened to be re-triggered by the apt-get run.
dragonheartlook at: dpkg -l | grep -v ^ii
sarkisrc percona-server-server-5.6 5.6.30-76.3-1.trusty amd64 Percona Server database server binaries
sarkisrc percona-server-server-5.7 5.7.10-3-1.trusty amd64 Percona Server database server binaries
sarkishi percona-xtradb-cluster-56 5.6.30-25.16-1.trusty amd64 Percona XtraDB Cluster with GaleraY
sarkishi - i believe just means hold
dragonhearti believe you should check properly. and if you have troubles seek help from a distro channel who understand apt better than me.
thumbsdefinitely ask a debian/ubuntu channel
LiuYanhi, i executed `mysql_upgrade` to 5.7.13 yesterday, and now `flush privileges` shows warnings "Truncated incorrect time value: 'N'". (Fedora 24 with MySQL official repository)
LiuYanand I can't grant privileges to user now, and when old user try connecting, either "Access denied for user ...." or "You must reset your password using ALTER user statement before executing this statement" error will occur
ss23Go do that then
ss23Like, alter that users password
dragonheartLiuYan: also 0000 isn't a valid time any more.
LiuYanss23: thanks, i tried, but it's a temp workaround: it works for some minutes, but after some minutes it fails to login again
ss23I'm honestly not sure what the problem is exactly, but can you delete the user entirely and try again? It sounds like invalid data somewhere
LiuYandragonheart: the warning message shows "Truncated incorrect time value: 'N'", it seems mysql is trying to cast a string 'N' to timestamp, not '0000-00-00 00:00:00'
LiuYandragonheart: i also issued `repair table mysql.user` and `optimize table mysql.user`, these statements are successfully issued, but not fixed the problem
LiuYani also checked the column order between `select * from user` and the schema file /usr/share/mysql/mysql_system_tables.sql, the column order is correct