Friday, December 30, 2016

[google-cloud-sql-discuss] Re: How to disable Read-Only in SQL replica?

Hi Nick,

Many thanks for the quick turnaround!

Ann



On Friday, December 30, 2016 at 10:14:50 PM UTC+1, paynen wrote:
Hey Ann Su,

I've replicated this and will be making sure it's seen by the relevant Cloud SQL specialists. I'll return to this thread to post a Public Issue Tracker link where you can follow the progress on this issue.

Cheers,

Nick
Cloud Platform Community Support 

On Wednesday, December 28, 2016 at 9:22:17 AM UTC-5, Ann Su wrote:
The Google Cloud Console allows the "read_only" flag to be set to OFF for a read replica. However, even though the flag is correctly set according to:

$ gcloud sql instances describe {{replica}}

[...]
  - name: read_only
    value: 'off'

the global variable in MySQL is still set to ON:

mysql> show variables like "read_only";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| read_only     | ON    |
+---------------+-------+

Is this a bug?

I'm aware of the risks of writing to a replica and am willing to accept them. If I can't disable read_only mode for built-in replicas, what are my options? Can I manually configure replication between SQL instances in a way that keeps the replica out of read_only mode?

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/384a79b8-832a-400c-822d-26d6c432181a%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

[google-cloud-sql-discuss] Re: SQL instance is not responding

Hey Adrian,

I can't find any issues with the instance from what I'm able to observe. I'll attempt to get someone closer to the Cloud SQL team to take a look. In the meantime, is this still occurring?

I should also mention that a thread like this is much better posted (with more details) to the Cloud SQL Public Issue Tracker, while this forum is better suited to more general, high-level discussion in which many users can take part.

Cheers,

Nick
Cloud Platform Community Support

On Thursday, December 29, 2016 at 9:11:50 AM UTC-5, Adrian Dybwad wrote:
This instance of MySQL has not responded for around two hours.

I cannot restore a backup to a new database either.

Please let me know what I should do.

purpleair-1293:us-central1:mysql-01

Thank you
Adrian

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/b5c67b18-a6bc-4e3f-ab38-57de503a76c9%40googlegroups.com.

[google-cloud-sql-discuss] Re: How to disable Read-Only in SQL replica?

Hey Ann Su,

I've replicated this and will be making sure it's seen by the relevant Cloud SQL specialists. I'll return to this thread to post a Public Issue Tracker link where you can follow the progress on this issue.

Cheers,

Nick
Cloud Platform Community Support 

On Wednesday, December 28, 2016 at 9:22:17 AM UTC-5, Ann Su wrote:
The Google Cloud Console allows the "read_only" flag to be set to OFF for a read replica. However, even though the flag is correctly set according to:

$ gcloud sql instances describe {{replica}}

[...]
  - name: read_only
    value: 'off'

the global variable in MySQL is still set to ON:

mysql> show variables like "read_only";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| read_only     | ON    |
+---------------+-------+

Is this a bug?

I'm aware of the risks of writing to a replica and am willing to accept them. If I can't disable read_only mode for built-in replicas, what are my options? Can I manually configure replication between SQL instances in a way that keeps the replica out of read_only mode?

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/e6c8a3e7-256d-4afd-8546-3e967942f1ee%40googlegroups.com.

[google-cloud-sql-discuss] Re: Cloud SQL instance stuck at restarting

I have the same problem with one Cloud SQL instance. I tried to restart the instance yesterday at 14:26:51 UTC-3, but as of now (10:11:00 UTC-3) the job still hasn't finished. I tried to delete the instance, but that didn't work either. I need to remove the instance so I won't keep being billed for it. I'm trying to do this in the console, but it isn't working properly.

Can somebody help?



On Thursday, December 29, 2016 at 11:11:50 AM UTC-3, Adrian Dybwad wrote:
Did you get this resolved? I just got the same thing happening today.

On Monday, December 19, 2016 at 7:04:02 AM UTC-7, Saurabh Gupta wrote:
I tried restarting my Cloud SQL instance and it's stuck. I don't know what to do.

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/bdad68f8-cfbf-4798-85db-8ef4b8a9d082%40googlegroups.com.

Wednesday, December 28, 2016

[google-cloud-sql-discuss] Re: How to disable Read-Only in SQL replica?

Hey Ann Su,

I'll be checking whether this is reproducible on my own instances. The documentation implies this is alterable, so it shouldn't be happening in my view. I'll update you with my findings within the next two days.

Cheers,

Nick
Cloud Platform Community Support

On Wednesday, December 28, 2016 at 9:22:17 AM UTC-5, Ann Su wrote:
The Google Cloud Console allows the "read_only" flag to be set to OFF for a read replica. However, even though the flag is correctly set according to:

$ gcloud sql instances describe {{replica}}

[...]
  - name: read_only
    value: 'off'

the global variable in MySQL is still set to ON:

mysql> show variables like "read_only";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| read_only     | ON    |
+---------------+-------+

Is this a bug?

I'm aware of the risks of writing to a replica and am willing to accept them. If I can't disable read_only mode for built-in replicas, what are my options? Can I manually configure replication between SQL instances in a way that keeps the replica out of read_only mode?

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/a3412438-3395-4de0-b573-42170a9b382d%40googlegroups.com.

[google-cloud-sql-discuss] Re: Cloud SQL instance stuck at restarting

Did you get this resolved? I just got the same thing happening today.

On Monday, December 19, 2016 at 7:04:02 AM UTC-7, Saurabh Gupta wrote:
I tried restarting my Cloud SQL instance and it's stuck. I don't know what to do.

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/6ea5ede4-d3e0-4867-a022-a3204e087cf9%40googlegroups.com.

[google-cloud-sql-discuss] SQL instance is not responding

This instance of MySQL has not responded for around two hours.

I cannot restore a backup to a new database either.

Please let me know what I should do.

purpleair-1293:us-central1:mysql-01

Thank you
Adrian

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/6f6b350a-954a-4f0c-9502-10c557e07c3f%40googlegroups.com.

[google-cloud-sql-discuss] Re: Google cloud SQL for production purpose

Hello Vaishnavi,


You are right, to some extent, to be concerned about the performance of an App Engine application using Cloud SQL Second Generation: network latency for connections between the App Engine standard environment and Second Generation instances is approximately double the latency for connections to First Generation instances. Also, at present, there is no support for query caching.

Besides Cloud SQL, you have the choice of other storage products that may better suit your purpose.

Regarding scaling, it may be worth bearing in mind that Datastore is designed with scalability in mind, whereas MySQL does not scale as well in practice.

That being said, you can design your application from the bottom up with scalability in mind, following generally applicable good design practices: designing for scale.

You may also have a look at other options, such as setting up and managing your own high-performance SQL server instance on Compute Engine. This way, you can implement query caching as desired at relatively low cost.
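If you do go the self-managed Compute Engine route, the query cache can be enabled in the MySQL server configuration. A minimal sketch, assuming MySQL 5.6/5.7 and purely illustrative sizes:

```ini
# my.cnf fragment -- values are placeholders, tune to your workload
[mysqld]
query_cache_type  = 1     # 0 = off, 1 = cache all cacheable SELECTs, 2 = on demand
query_cache_size  = 64M   # total memory reserved for cached result sets
query_cache_limit = 1M    # skip caching any single result larger than this
```

Whether the cache is active can then be checked from a MySQL session with SHOW VARIABLES LIKE 'query_cache%';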

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/dbc4b141-08ed-4fe5-be2d-d9cd17511f66%40googlegroups.com.

[google-cloud-sql-discuss] How to disable Read-Only in SQL replica?

The Google Cloud Console allows the "read_only" flag to be set to OFF for a read replica. However, even though the flag is correctly set according to:

$ gcloud sql instances describe {{replica}}

[...]
  - name: read_only
    value: 'off'

the global variable in MySQL is still set to ON:

mysql> show variables like "read_only";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| read_only     | ON    |
+---------------+-------+

Is this a bug?

I'm aware of the risks of writing to a replica and am willing to accept them. If I can't disable read_only mode for built-in replicas, what are my options? Can I manually configure replication between SQL instances in a way that keeps the replica out of read_only mode?
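As an aside, the describe-and-check step above is easy to script. A rough sketch that pulls the flag out of `gcloud sql instances describe` output, assuming only the YAML shape shown in this post (the `read_only_flag` helper is illustrative, not a gcloud feature):

```python
import re

def read_only_flag(describe_output):
    """Extract the value of the read_only database flag from
    `gcloud sql instances describe` output (None if the flag is absent)."""
    # The flag appears as "- name: read_only" followed by "value: 'off'".
    match = re.search(r"-\s*name:\s*read_only\s*\n\s*value:\s*'?(\w+)'?",
                      describe_output)
    return match.group(1) if match else None

sample = """
databaseFlags:
  - name: read_only
    value: 'off'
"""
print(read_only_flag(sample))  # -> off
```

Note that this only reports the instance flag; as the post shows, the in-server global variable still has to be checked separately via SHOW VARIABLES.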

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/c11bf89c-0bd1-4d39-9c5f-396b107d576e%40googlegroups.com.

Friday, December 23, 2016

[google-cloud-sql-discuss] Google cloud SQL for production purpose

Hi,
I am developing an enterprise-level web application using Google App Engine with Java as the primary language, with a Google Cloud SQL Second Generation instance as the database. My application will have to deal with, say, half a million users creating lots of bills and invoices and recording transactions on an everyday basis. I have read a lot of posts saying that query caching does not work with Cloud SQL and that performance is too slow. My application needs fast CRUD operations, and I really need clarity on performance, backup, latency, storage, and optimization issues. If query caching is not supported with Cloud SQL, will enabling query caching and other optimization techniques using JPA providers like EclipseLink work?

Any help asap will be appreciated !
Thanks :)
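Not speaking to EclipseLink specifically, but what a JPA provider's query cache buys you is an application-side result cache, whose shape can be sketched in a few lines (shown here in Python for brevity; all names are illustrative):

```python
import time

class QueryCache:
    """Tiny TTL cache for read-mostly query results."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self._store = {}          # sql text -> (fetch time, rows)

    def get(self, sql, run_query):
        """Return cached rows if still fresh, otherwise call run_query(sql)."""
        hit = self._store.get(sql)
        if hit is not None and time.time() - hit[0] < self.ttl:
            return hit[1]                    # served from cache
        rows = run_query(sql)                # cache miss: hit the database
        self._store[sql] = (time.time(), rows)
        return rows

cache = QueryCache(ttl_seconds=60)
rows = cache.get("SELECT id FROM invoices", lambda sql: [(1,), (2,)])
```

A real provider-level cache also invalidates entries on writes, which this sketch deliberately omits.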

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/cf2aa7c4-ae82-471c-903f-348d3425d840%40googlegroups.com.

Wednesday, December 21, 2016

[google-cloud-sql-discuss] Re: Unix Socket sometimes not present

Hey Myles,

Aside from the hint that connection pooling is involved, I've been unable to reproduce the issues you've seen, even when using a pooled connection myself. Is this still occurring after switching to a non-pooled connection model?

Regards,

Nick
Cloud Platform Community Support


On Monday, December 19, 2016 at 4:33:04 PM UTC-5, paynen wrote:
Hey Myles,

You've done an extraordinary job in cataloging the information needed to look into this further. Apologies that I've not got anything definitive to relate, as I'm in the process of attempting to replicate this behaviour. It appears to be related to connection pooling, although I'm not sure exactly how. I hope to update this thread within the next 2 days with more information.

Regards,

Nick
Cloud Platform Community Support

On Friday, December 9, 2016 at 3:20:17 PM UTC-5, Myles Bostwick wrote:
Hi Nick,

I've been continuing to isolate the problem by reducing the rate at which I process data. The error appears to fluctuate on an instance when running at a slower rate (in this case 6/m). I've attached some screenshots of my log on one instance. The info messages are successful calls, the warnings failed at least one connection attempt, and the criticals failed all three attempts to connect. I am still seeing a pretty high failure rate of around 30-40 percent, versus about 50% at a rate of 10/s. At this point I don't have a theory as to what is happening; 6/m is about two orders of magnitude slower than our ideal rate of 10/s.

I'm going to try 2 more tests:

1. Change MySQLdb to version 1.2.4 and 1.2.5 (presently at "latest" which is 1.2.4b4 apparently)
2. Try these iterations on CloudSQL First Generation now that the inefficient SQL is no longer present.

Thanks again for looking, I hope we can get to a solution on this.

Cheers,

Myles

On Thursday, December 8, 2016 at 2:25:58 PM UTC-7, Myles Bostwick wrote:
Hi Nick,

I appreciate you getting back to me.

I've attached an example that produces the behavior; through testing I've come to understand a little better what's going on. Once a certain rate is reached, while MySQL is under load from processing, the error is returned. Originally I created an example that just sent "SELECT 1" to MySQL and could not induce the error, so MySQL has to be under some load.

The example I've attached induces the error, though there are two classes of errors:

1. The "No such file" error I've originally reported
2. The understandable Deadlock error from mysql

I'm not concerned about #2, as that's just a SQL optimization I've already taken care of, but I still receive the "No such file" error in my production code without a single deadlock error.

"The patterns and frequencies of connections on your instances"

I have a task queue that is set up to process all my SQL connections, so that (1) requests aren't hampered by SQL operations and (2) we can rate-limit the interactions with MySQL. My autoscale settings restrict it to 6 connections; given the 12-connection limit, I wanted to give it some room.

"The way in which you've determined it's isolated to a given instance"

I don't think I was clear when I described this: once an instance starts exhibiting this symptom, subsequent requests to that instance all exhibit it. I determined this just by checking the instance id in the log messages and by restricting to a given instance id.

Hopefully my attached example will enable reproduction. It's a little messy because I consolidated several files into one file for ease of transport, but it should be fairly straightforward.

Let me know if you need me to file this away to the issue tracker you mentioned, I'm happy to do so.

Cheers,

Myles

On Tuesday, December 6, 2016 at 1:44:54 PM UTC-7, paynen wrote:
Hey Myles,

An issue like this would be best reported in the Public Issue Tracker for Cloud SQL.

Nonetheless, we can continue to work on it here until we can determine more accurately what should go in the report. There are some more pieces of information that could be relevant here:

* The patterns and frequencies of connections on your instances 

* The way in which you've determined it's isolated to a given instance

* A description of the pipeline task you're performing so we can attempt to reproduce the issue that way

Feel free to add any more information you think could help in reproducing this issue.

Regards,

Nick
Cloud Platform Community Support

On Monday, December 5, 2016 at 1:02:56 PM UTC-5, Myles Bostwick wrote:
Hello All,

I've been playing around with the pipelines library (https://github.com/GoogleCloudPlatform/appengine-pipelines) in my standard App Engine environment and managed to get enough parallel instances running to cause problems. Sometimes, and it seems persistently for a given instance, the unix socket is not present (as seen in the stack trace below). Things I have tried to mitigate this:

  1. On failed connect, try 3 times with 4 seconds between attempts (since I have a 60 second max request time, this seems HUGE)
  2. Reducing concurrent requests per instance to 8
  3. Reducing max concurrent requests to 20
  4. Upgrading from First Generation to Second Generation Cloud SQL instances
  5. Restricting to a single instance (which does "solve" the problem, but doesn't meet my throughput goals)
I've consistently been able to reproduce this issue, for all its inconsistent nature. My final recourse is going to be to wait out the 3 retries, then connect via IP instead. Any thoughts or suggestions in addition to this would be greatly appreciated.
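For what it's worth, the retry-then-IP plan in the last paragraph could look something like this (the two connector callables are stand-ins for real socket and TCP connection functions, not an existing API):

```python
import time

def connect_with_fallback(connect_socket, connect_ip,
                          attempts=3, delay_seconds=4):
    """Try the unix-socket connector `attempts` times, then fall back to IP."""
    for _ in range(attempts):
        try:
            return connect_socket()
        except OSError:               # e.g. "No such file or directory"
            time.sleep(delay_seconds)
    # The socket never became available; take the TCP/IP route instead.
    return connect_ip()
```

With the 4-second delay above, three failures burn 12 of the 60-second request deadline before the fallback fires, which matches the concern in item 1 of the list.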

Thanks in advance,

Myles


...
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/engine/base.py", line 1778, in connect
    return self._connection_cls(self, **kwargs)
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/engine/base.py", line 60, in __init__
    self.__connection = connection or engine.raw_connection()
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/engine/base.py", line 1847, in raw_connection
    return self.pool.unique_connection()
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/pool.py", line 280, in unique_connection
    return _ConnectionFairy._checkout(self)
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/pool.py", line 644, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/pool.py", line 440, in checkout
    rec = pool._do_get()
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/pool.py", line 1057, in _do_get
    return self._create_connection()
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/pool.py", line 285, in _create_connection
    return _ConnectionRecord(self)
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/pool.py", line 411, in __init__
    self.connection = self.__connect()
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/pool.py", line 538, in __connect
    connection = self.__pool._creator()
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/engine/strategies.py", line 96, in connect
    connection_invalidated=invalidated
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/util/compat.py", line 199, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/engine/strategies.py", line 90, in connect
    return dialect.connect(*cargs, **cparams)
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/engine/default.py", line 377, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/MySQLdb-1.2.4b4/MySQLdb/__init__.py", line 81, in Connect
    return Connection(*args, **kwargs)
  File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/MySQLdb-1.2.4b4/MySQLdb/connections.py", line 190, in __init__
    super(Connection, self).__init__(*args, **kwargs2)
OperationalError: (OperationalError) (2062, 'Cloud SQL socket open failed with error: No such file or directory') None None

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/9634ac3e-ae1f-4af3-b0a2-7300a39a836e%40googlegroups.com.

Tuesday, December 20, 2016

[google-cloud-sql-discuss] Re: Cloud SQL instance stuck at restarting

We would need additional information to have a better idea of how to help.
  • Are you using a first or second generation instance?
  • What action was requested or performed that caused the instance to enter this stuck state?
  • What state is shown when using the Instances: get API? Other relevant metadata about the instance may be found there as well.

On Monday, December 19, 2016 at 9:04:02 AM UTC-5, Saurabh Gupta wrote:
I tried restarting my Cloud SQL instance and it's stuck. I don't know what to do.

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/cc2e17b3-61f5-49b5-9c7c-eb24fffafa7c%40googlegroups.com.

Monday, December 19, 2016

[google-cloud-sql-discuss] Re: Cannot connect to 2nd generation instance from eclipse

Hi,

Sorry for the delay.  Take a look at the updated sample: https://github.com/GoogleCloudPlatform/java-docs-samples/tree/master/appengine/cloudsql -- If you still have trouble, let us know.

Les

On Monday, December 5, 2016 at 9:23:12 AM UTC-8, Vaishnavi Manjunath wrote:
Hi,
I was able to create a sample RESTful application using Jersey and Jackson and deploy it on GAE. The next thing I want to do is connect to a Google Cloud SQL Second Generation instance, which I am unable to do. I will list the steps I followed for better clarity.

1. Created a 2nd gen instance associated with the GAE project; under the root user I changed the password.
2. Created a database and, with the help of Cloud Shell, created a table with 1 record in that database.
3. Made a note of the instance connection name, database name, username and password.
4. Installed the MySQL installer; was able to install the MySQL server and the command-line client tool.
5. Made a note of my machine's IP address and added it under the authorized networks of the SQL instance.
6. Installed the mysql-connector-java-5.1.14 jar file and added it to the lib folder under WEB-INF (added to the build path).
7. Made changes to appengine-web.xml as per this link: https://cloud.google.com/appengine/docs/java/cloud-sql/ . In both property name tags I have added the instance connection name, db name, username and password (I read in a few blogs that it's not needed in both). Please provide clarity on this; I am not sure what these two tags mean.
8. After all this, I went to project -> Google -> App Engine settings -> enable Google Cloud SQL. Under this there are three options: MySQL instance, Google Cloud SQL instance (I want to know the difference between the two) and App Engine SQL instance. I am not sure which one to configure to test locally and which when the app is deployed; more light on the local MySQL instance would be of great help :)
9. I tried configuring all three, and ended up getting this error:

"Could not connect to Profile (test.GoogleCloudSQL.AppEngineInstance).
Error creating SQL Model Connection connection to Profile (test.GoogleCloudSQL.AppEngineInstance). (Error: com.mysql.jdbc.Driver)
com.mysql.jdbc.Driver
Error creating Google Cloud SQL Connection factory connection to Profile (test.GoogleCloudSQL.AppEngineInstance). (Error: com.mysql.jdbc.Driver)
com.mysql.jdbc.Driver"

I did read that this problem exists because the jar file is not added, but I have added it as mentioned above.

10. I even installed DTP in Eclipse and added a new driver connection with MySQL, the MySQL JDBC driver and the same jar file as mentioned above.

I am using GAE version 1.9.46 and Eclipse Neon. I would also like to mention that I haven't written any JDBC code to connect; I was exploring this option first, as required in: https://developers.google.com/eclipse/docs/cloudsql-createapp#locally

So I am absolutely clueless as to what to do next; your help will be appreciated :)

Thanks,
Vaishnavi :)

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/9f8f478d-1ead-4467-86df-6c9ccb7f71c3%40googlegroups.com.

[google-cloud-sql-discuss] Re: Unix Socket sometimes not present

Hey Myles,

You've done an extraordinary job in cataloging the information needed to look into this further. Apologies that I've not got anything definitive to relate, as I'm in the process of attempting to replicate this behaviour. It appears to be related to connection pooling, although I'm not sure exactly how. I hope to update this thread within the next 2 days with more information.

Regards,

Nick
Cloud Platform Community Support

On Friday, December 9, 2016 at 3:20:17 PM UTC-5, Myles Bostwick wrote:
Hi Nick,

I've been continuing to isolate the problem by reducing the rate at which I process data. The error appears to fluctuate on an instance when running at a slower rate (in this case 6/m). I've attached some screenshots of my log on one instance. The info messages are successful calls, the warnings failed at least one connection attempt, and the criticals failed all three attempts to connect. I am still seeing a pretty high failure rate of around 30-40 percent, versus about 50% at a rate of 10/s. At this point I don't have a theory as to what is happening; 6/m is about two orders of magnitude slower than our ideal rate of 10/s.

I'm going to try 2 more tests:

1. Change MySQLdb to version 1.2.4 and 1.2.5 (presently at "latest" which is 1.2.4b4 apparently)
2. Try these iterations on CloudSQL First Generation now that the inefficient SQL is no longer present.

Thanks again for looking, I hope we can get to a solution on this.

Cheers,

Myles

On Thursday, December 8, 2016 at 2:25:58 PM UTC-7, Myles Bostwick wrote:
Hi Nick,

I appreciate you getting back to me.

I've attached an example that produces the behavior; through testing I've come to understand a little better what's going on. Once a certain rate is reached, while MySQL is under load from processing, the error is returned. Originally I created an example that just sent "SELECT 1" to MySQL and could not induce the error, so MySQL has to be under some load.

The example I've attached induces the error, though there are two classes of errors:

1. The "No such file" error I've originally reported
2. The understandable Deadlock error from mysql

I'm not concerned about #2, as that's just a SQL optimization I've already taken care of, but I still receive the "No such file" error in my production code without a single deadlock error.

"The patterns and frequencies of connections on your instances"

I have a task queue that is set up to process all my SQL connections, so that (1) requests aren't hampered by SQL operations and (2) we can rate-limit the interactions with MySQL. My autoscale settings restrict it to 6 connections; given the 12-connection limit, I wanted to give it some room.
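The task-queue rate limiting described above is essentially a token bucket in front of MySQL; a minimal sketch (the numbers are illustrative, not App Engine settings):

```python
import time

class TokenBucket:
    """Allow at most `rate` operations per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the bucket size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # caller should defer the task and retry later

bucket = TokenBucket(rate=10, capacity=10)   # roughly 10 MySQL tasks per second
```

A task that gets False back would be re-queued with a delay rather than dropped, which is what a task queue's rate settings do for you.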

"The way in which you've determined it's isolated to a given instance"

I don't think I was clear when I described this: once an instance starts exhibiting this symptom, subsequent requests to that instance all exhibit it. I determined this just by checking the instance id in the log messages and by restricting to a given instance id.

Hopefully my attached example will enable reproduction. It's a little messy because I consolidated several files into one file for ease of transport, but it should be fairly straightforward.

Let me know if you need me to file this away to the issue tracker you mentioned, I'm happy to do so.

Cheers,

Myles

On Tuesday, December 6, 2016 at 1:44:54 PM UTC-7, paynen wrote:
Hey Myles,

An issue like this would be best reported in the Public Issue Tracker for Cloud SQL.

Nonetheless, we can continue to work on it here until we can determine more accurately what should go in the report. There are some more pieces of information that could be relevant here:

* The patterns and frequencies of connections on your instances 

* The way in which you've determined it's isolated to a given instance

* A description of the pipeline task you're performing so we can attempt to reproduce the issue that way

Feel free to add any more information you think could help in reproducing this issue.

Regards,

Nick
Cloud Platform Community Support

On Monday, December 5, 2016 at 1:02:56 PM UTC-5, Myles Bostwick wrote:
Hello All,

I've been playing around with the pipelines library (https://github.com/GoogleCloudPlatform/appengine-pipelines) in my standard App Engine environment and managed to get enough parallel instances running to cause problems. Sometimes, and it seems persistently for a given instance, the unix socket is not present (as seen in the stack trace below). Things I have tried to mitigate this:

  1. On failed connect, try 3 times with 4 seconds between attempts (since I have a 60 second max request time, this seems HUGE)
  2. Reducing concurrent requests per instance to 8
  3. Reducing max concurrent requests to 20
  4. Upgrading from First Generation to Second Generation Cloud SQL instances
  5. Restricting to a single instance (which does "solve" the problem, but doesn't meet my throughput goals)
I've consistently been able to reproduce this issue, for all its inconsistent nature. My final recourse is going to be to wait out the 3 retries, then connect via IP instead. Any thoughts or suggestions in addition to this would be greatly appreciated.

Thanks in advance,

Myles


...
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/engine/base.py", line 1778, in connect
    return self._connection_cls(self, **kwargs)
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/engine/base.py", line 60, in __init__
    self.__connection = connection or engine.raw_connection()
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/engine/base.py", line 1847, in raw_connection
    return self.pool.unique_connection()
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/pool.py", line 280, in unique_connection
    return _ConnectionFairy._checkout(self)
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/pool.py", line 644, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/pool.py", line 440, in checkout
    rec = pool._do_get()
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/pool.py", line 1057, in _do_get
    return self._create_connection()
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/pool.py", line 285, in _create_connection
    return _ConnectionRecord(self)
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/pool.py", line 411, in __init__
    self.connection = self.__connect()
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/pool.py", line 538, in __connect
    connection = self.__pool._creator()
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/engine/strategies.py", line 96, in connect
    connection_invalidated=invalidated
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/util/compat.py", line 199, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/engine/strategies.py", line 90, in connect
    return dialect.connect(*cargs, **cparams)
  File "/base/data/home/apps/s~hydrovu-dev/api:2-0-0.397476056640579045/lib/sqlalchemy/engine/default.py", line 377, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/MySQLdb-1.2.4b4/MySQLdb/__init__.py", line 81, in Connect
    return Connection(*args, **kwargs)
  File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/MySQLdb-1.2.4b4/MySQLdb/connections.py", line 190, in __init__
    super(Connection, self).__init__(*args, **kwargs2)
OperationalError: (OperationalError) (2062, 'Cloud SQL socket open failed with error: No such file or directory') None None

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/5c8286c0-9546-43c4-921c-6754ac398076%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

[google-cloud-sql-discuss] Cloud SQL instance stuck at restarting

I tried restarting my Cloud SQL instance and it's stuck. I don't know what to do.

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/fa4f08d7-be8e-4662-964f-9e3c34023ef2%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Saturday, December 17, 2016

[google-cloud-sql-discuss] Stuck on PENDING_CREATE state while creating replica

About 2 weeks ago, I was trying to set up external master replication following these instructions:

  https://cloud.google.com/sql/docs/replication/configure-external-master


But even after creating the replication database, the replica instances are still in a pending state.


FYI this is what it looks like.

$ curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
>  --header 'Content-Type: application/json' \
>  -X GET \
>  https://www.googleapis.com/sql/v1beta4/projects/mathpresso-1016/instances/qbase-primary-repl-read/
{
 "kind": "sql#instance",
 "selfLink": "https://www.googleapis.com/sql/v1beta4/projects/mathpresso-1016/instances/qbase-primary-repl-read",
 "name": "qbase-primary-repl-read",
 "connectionName": "mathpresso-1016:qbase-primary-repl-read",
 "etag": some values,
 "project": "mathpresso-1016",
 "state": "PENDING_CREATE",
 "backendType": "FIRST_GEN",
 "databaseVersion": "MYSQL_5_6",
 "region": "us-central",
 "currentDiskSize": "281812187",
 "maxDiskSize": "268435456000",
 "settings": {
  "kind": "sql#settings",
  "settingsVersion": "1",
  "authorizedGaeApplications": [],
  "tier": "D4",
  "backupConfiguration": {
   "kind": "sql#backupConfiguration",
   "startTime": "01:00",
   "enabled": false,
   "binaryLogEnabled": false
  },
  "pricingPlan": "PER_USE",
  "replicationType": "ASYNCHRONOUS",
  "activationPolicy": "ALWAYS",
  "ipConfiguration": {
   "ipv4Enabled": false,
   "authorizedNetworks": []
  },
  "databaseReplicationEnabled": true,
  "crashSafeReplicationEnabled": true
 },
 "serverCaCert": {
  "kind": "sql#sslCert",
  "instance": "qbase-primary-repl-read",
  ...
  ...
  "createTime": "2016-12-02T17:45:20.934Z",
  "expirationTime": "2018-12-02T17:46:20.934Z"
 },
 "instanceType": "READ_REPLICA_INSTANCE",
 "masterInstanceName": "mathpresso-1016:qbase-primary-repl-internalmaster",
 "ipv6Address": some values,
 "replicaConfiguration": {
  "kind": "sql#replicaConfiguration",
  "failoverTarget": false
 }
}


Now I want to delete them, but I don't know how to delete these instances (every time I try, it shows an error).

Could you please delete these instances?

qbase-primary-repl-internalmaster
qbase-primary-repl-read
qbase-primary-replica-internalmaster
qbase-primary-replica-read
qbase-primary-replica-readonly

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/f6c19a0c-f5d7-487b-91f1-1584be7afbc0%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Thursday, December 15, 2016

[google-cloud-sql-discuss] Re: CloudSQL create replica slave problem (external replication)

Apologies for the lengthy radio silence. I could not reproduce this issue on my end: I got a 200 every time I tried, including when trying with some deliberate typos in the request body values.

Have you tried creating the replica from the API Explorer for Instances: insert? You can supply the JSON request body the way it's provided in the curl example and authenticate through the web OAuth flow. If you're authorized, it should return a 200 OK with an operation resource. You can inspect the progress of the operation by providing its name to the Operations: get API. This should show either success or the errors encountered in replica creation.
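As a sketch of that polling flow (a hypothetical helper, not part of any client library: `get_operation` stands in for a thin wrapper around the Operations: get endpoint that returns the operation resource as a dict):

```python
import time


def wait_for_operation(get_operation, name, poll_interval=5, timeout=300):
    """Poll an operation until its status is DONE, then surface any errors.

    `get_operation` is a callable taking the operation name and returning
    the operation resource as a dict (i.e. the Operations: get response).
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        op = get_operation(name)
        if op.get("status") == "DONE":
            # Replica-creation failures are reported here, not in the
            # response to the initial insert call.
            if "error" in op:
                raise RuntimeError(str(op["error"]))
            return op
        time.sleep(poll_interval)
    raise RuntimeError("operation %s did not finish within %ss" % (name, timeout))
```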

On Friday, December 2, 2016 at 2:40:14 PM UTC-5, Nicholas (Google Cloud Support) wrote:
I am working on trying to reproduce this error but no luck yet.  I still have a few things to try.  I'll get back to you early next week.

On Tuesday, November 29, 2016 at 9:05:25 AM UTC-5, Lisei Andrei wrote:

I'm following this guide for creating a Cloud SQL slave for an external MySQL database.

I've managed to create an internal master, but when I try to create the replica I get this error (I'm the project owner of the GCP project):

{
 "error": {
  "errors": [
   {
    "domain": "global",
    "reason": "notAuthorized",
    "message": "The client is not authorized to make this request."
   }
  ],
  "code": 403,
  "message": "The client is not authorized to make this request."
 }
}

Thanks! 

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/af6c63e6-0209-4522-8a2c-acf3c073f6fa%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Friday, December 9, 2016

[google-cloud-sql-discuss] Re: Unix Socket sometimes not present

Hi Nick,

I've been continuing to isolate the problem by reducing the rate at which I process data. The error still appears intermittently on an instance even at a slower rate (in this case 6/m). I've attached some screenshots of my log on one instance: the info messages are successful calls, the warnings failed on at least one connection attempt, and the criticals failed on all three attempts to connect. I am still seeing a pretty high failure rate of around 30-40 percent, vs. about 50% at a rate of 10/s. At this point I don't have a theory as to what is happening; 6/m is about 2 orders of magnitude slower than our ideal rate of 10/s.

I'm going to try two more tests:

1. Change MySQLdb to versions 1.2.4 and 1.2.5 (presently at "latest", which is apparently 1.2.4b4)
2. Try these iterations on Cloud SQL First Generation now that the inefficient SQL is no longer present.

Thanks again for looking, I hope we can get to a solution on this.

Cheers,

Myles

On Thursday, December 8, 2016 at 2:25:58 PM UTC-7, Myles Bostwick wrote:
Hi Nick,

I appreciate you getting back to me.

I've attached an example that reproduces the behavior. Through testing I've come to understand a little better what's going on: once a certain request rate is reached while MySQL is under load, the error is returned. Originally I created an example that just sent "SELECT 1" to MySQL and could not induce the error, so MySQL has to be under some load.

The example I've attached induces the error, though there are two classes of errors:

1. The "No such file" error I originally reported
2. The understandable Deadlock error from mysql

I'm not concerned about #2 as that's just a SQL optimization I've already taken care of, but I still receive the "No such file" error in my production code without a single deadlock error.

"The patterns and frequencies of connections on your instances"

I have a taskqueue set up to process all my SQL connections, so that (1) requests aren't hampered by SQL operations and (2) we can rate-limit the interactions with MySQL. My autoscale settings restrict it to 6 connections; given the 12-connection limit, I wanted to give it some room.
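A taskqueue configured that way would look roughly like this in queue.yaml (the queue name and exact numbers are illustrative, not the actual settings):

```yaml
queue:
- name: sql-work              # hypothetical queue name
  rate: 10/s                  # the ideal throughput mentioned above
  max_concurrent_requests: 6  # leave headroom under the 12-connection limit
```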

"The way in which you've determined it's isolated to a given instance"

I don't think I was clear when I described this: once an instance starts exhibiting this symptom, all subsequent requests to that instance exhibit it too. I determined this just by checking the instance ID in the log messages and by restricting to a given instance ID.

Hopefully my attached example will enable reproduction. It's a little messy because I consolidated several files into one file for ease of transport, but it should be fairly straightforward.

Let me know if you need me to file this in the issue tracker you mentioned; I'm happy to do so.

Cheers,

Myles


--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/483ff18b-13cb-417a-8a29-1d096fd626d1%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Thursday, December 8, 2016

Re: [google-cloud-sql-discuss] Stored Procedure disappearing in Google Cloud SQL - 1st Generation Instance

Hi David,

Thanks for your reply. I mentioned the mysql.proc table crashing in my development instance, but in my production instance I have never seen a crash in the mysql.proc table.

The development and production instances are different, and I asked only after investigating.

There was no mysql.proc table crash. I'll try your solution of running a FLUSH TABLES operation after creating/changing stored procedures/events/triggers, etc.

Thanks David.


On Wednesday, December 7, 2016 at 10:47:28 PM UTC+5:30, David Newgas wrote:
Hi,

Without investigating in greater detail, I believe the most likely cause is the mysql.proc table issue you mention. The reason this can occur is that MySQL 5.x only supports MyISAM for the mysql.* tables, and MyISAM can lose data across a crash/hard restart. I believe the best solution is to run a FLUSH TABLES operation after creating/changing stored procedures/events/triggers etc., to make sure they have been written to disk and will survive any future restarts.
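Concretely, that means issuing a FLUSH TABLES right after the routine DDL, for example (hypothetical procedure name):

```sql
CREATE PROCEDURE my_proc() SELECT 1;
-- Flush so the MyISAM-backed mysql.* tables are written to disk:
FLUSH TABLES;
```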

David

On Tue, Dec 6, 2016 at 11:17 PM, Dhandapani Sattanathan <dhandapani....@ssomens.com> wrote:
Cloud SQL Team,

For the past 3 years I've been using Google Cloud SQL with Google App Engine.

In my development instance, the most recently run stored procedure disappears most of the time. I assumed that, since it's a development instance, the proc table was crashing, or that the stored procedures were disappearing for some other reason.

But in my production instance, two recently run stored procedures disappeared. Because of this, we had a big issue in production.

Could you please explain why this is happening in a Google Cloud SQL 1st Generation instance?

Thanks in advance.


--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/daf127a4-08d1-4d79-a197-8c215c02b40d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/a61112b9-e511-4078-ae1f-a8757dcf9439%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

[google-cloud-sql-discuss] Re: Unix Socket sometimes not present

import pipeline
import random
import time
import uuid
import logging
import sqlalchemy
from sqlalchemy import orm
from sqlalchemy.orm import aliased
from sqlalchemy.sql import expression
from sqlalchemy.exc import OperationalError
from sqlalchemy.ext import declarative
from sqlalchemy.pool import NullPool


LOG = logging.getLogger(__file__)
BASE = declarative.declarative_base()
PRODUCT_A = 100
PRODUCT_B = 100


# For clarity and ease of reproduction, I have moved most functions and classes into one file

'''
Example Run:
# Modify the connection parameters below to match your settings.
db = getDb()
createTables(db.engine)  # First run only
trace = str(uuid.uuid4())  # Makes finding log messages easier
pipe = SqlConnectionProductA(trace)
pipe.start()
'''


#### Begin database functions

# FIXME: Add your own credentials
def getDb():
    # This fetches our appengine-specific information; you should create your own:
    # connectionParameters, connectionString, poolclass = SqlHelper.getDatabaseConnectionInfo()

    # For clarity, this is the same connection string used to connect
    connectionString = 'mysql+mysqldb://{user}@/{db}?unix_socket=/cloudsql/{dbInstanceName}'

    # connectionParameters is in the form below; fill in your own values
    connectionParameters = {
        "db": 'mydb',
        "user": 'myuser',
        "dbInstanceName": 'myinstance'
    }

    return getDbContainer(connectionParameters, connectionString)


class DbContainer(object):

    def __init__(self, engine, connection):
        self.engine = engine
        self.connection = connection

    def reconnect(self):
        try:
            self.connection = self.engine.connect()
        except OperationalError:
            LOG.exception("Failed to reconnect.")


class SessionTransaction(object):

    def __init__(self, db=None):
        if db is None:
            raise Exception("No sql session, cannot proceed")
        self.session = orm.sessionmaker(bind=db.engine)()

    def __enter__(self):
        return self.session

    def __exit__(self, type, value, tb):
        try:
            self.session.commit()
        except OperationalError:
            self.session.rollback()

        self.session.close()


def getEngine(connectionParameters, connectionString):
    return sqlalchemy.create_engine(connectionString.format(**connectionParameters), poolclass=NullPool)


def getDbContainer(connectionParameters, connectionString):
    engine = getEngine(connectionParameters, connectionString)
    maxAttempts = 3
    dbConnection = None
    for attempt in xrange(0, maxAttempts):
        try:
            dbConnection = engine.connect()
            break
        except OperationalError:
            LOG.warn("Failed to connect with parameters %s. Retrying... %s of %s" % (connectionString.format(**connectionParameters), attempt + 1, maxAttempts))
            time.sleep(4)
            if attempt == maxAttempts - 1:
                LOG.exception("Max reconnect attempts reached.")
                raise

    return DbContainer(engine, dbConnection)


class ThrashingParentTable(BASE):
    __tablename__ = 'ThrashingParentTable'
    id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True)
    uuid = sqlalchemy.Column(sqlalchemy.String(150))
    marker1 = sqlalchemy.Column(sqlalchemy.Integer, nullable=False)
    marker2 = sqlalchemy.Column(sqlalchemy.Integer, nullable=False)


# A model representing a temporary table used in a metadata update query
class ThrashingTable(BASE):
    __tablename__ = 'ThrashingTable'
    id = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True)
    uuid = sqlalchemy.Column(sqlalchemy.String(150))
    marker1 = sqlalchemy.Column(sqlalchemy.Integer, nullable=False)
    marker2 = sqlalchemy.Column(sqlalchemy.Integer, nullable=False)
    transaction_id = sqlalchemy.Column(sqlalchemy.String(150))


def createTables(engine):
    ThrashingTable.__table__.create(bind=engine)
    ThrashingParentTable.__table__.create(bind=engine)


#### End database functions

#### Begin Pipeline classes

class SqlConnectionProductA(pipeline.Pipeline):

    def run(self, trace):
        for x in xrange(0, PRODUCT_A):
            yield SqlConnectionProductB(trace)


class SqlConnectionProductB(pipeline.Pipeline):

    def run(self, trace):
        for x in xrange(0, PRODUCT_B):
            yield SqlConnectionTester(trace)


class SqlConnectionTester(pipeline.Pipeline):

    def run(self, trace):
        key = str(uuid.uuid4())
        transaction_id = str(uuid.uuid4())
        rows = []
        # Create some data that can be inserted
        for _ in xrange(0, 50):
            rows.append(ThrashingTable(
                uuid=key,
                marker1=random.randint(0, 25),
                marker2=random.randint(1481000000, 1481220141),
                transaction_id=transaction_id
            ))

        db = getDb()

        '''
        This is a little contrived, but close to what I was doing upon initially seeing the problem.
        Essentially all you need are some transactions that cause the database to do
        some real work. I've since simplified my sql, but they still do *some* work and
        it seems enough to induce the problem. This just exacerbates it.
        '''
        with SessionTransaction(db) as session:
            # Insert some rows into our thrash table
            session.bulk_save_objects(rows)
            session.flush()
            thrashTable = aliased(ThrashingTable)

            subquery = session.query(ThrashingParentTable)\
                .join(thrashTable, thrashTable.uuid == key)

            q = session.query(thrashTable.uuid.label(ThrashingParentTable.uuid.name),
                              thrashTable.marker1.label(ThrashingParentTable.marker1.name),
                              thrashTable.marker2.label(ThrashingParentTable.marker2.name)) \
                .filter(~subquery.exists()) \
                .filter(thrashTable.transaction_id == transaction_id)

            # Insert rows into the parent table, if they don't already exist
            insert = expression.insert(ThrashingParentTable) \
                .from_select([ThrashingParentTable.uuid,
                              ThrashingParentTable.marker1,
                              ThrashingParentTable.marker2],
                             q)

            session.execute(insert)

            session.query(ThrashingTable)\
                .filter(ThrashingTable.transaction_id == transaction_id)\
                .delete()


#### End pipeline classes


--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/2042a095-d425-49ae-9ad7-b16a31c5ceb6%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.