Monday, June 25, 2012

Re: Troubleshooting "too many concurrent requests"

On Sat, Jun 23, 2012 at 5:46 AM, Jake Czyz <jjc.work@gmail.com> wrote:
Hi Ken,

I'm the one that asked both of those questions. :-)

I still hit the limits, and my application becomes unusable after a
while, with everything waiting on hung SQL processes that seem to be
using up the 100-connection limit.  It's only after I restart
my instance in the API console that things start working again.  Is
there a way to programmatically restart the instance?

No, sorry.
 
Is there a way to
reduce the timeout on the connections so that it's the same as or
shorter than the GAE timeout?

Not really.  You can reduce the deadlock timeout within MySQL.
 
You could also set up a cron job to kill long-running queries.
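That cron job might look something like the sketch below.  It's a hypothetical illustration, not a Cloud SQL API: names like get_connection() and MAX_QUERY_SECONDS are made up, and it assumes the instance is reachable with an ordinary MySQL client library (e.g. MySQLdb).  The selection logic lives in its own function so it can be exercised without a live server.

```python
# Hypothetical cron handler that kills long-running MySQL queries.
# Row layout matches the columns returned by SHOW PROCESSLIST:
# (Id, User, Host, db, Command, Time, State, Info).

MAX_QUERY_SECONDS = 30  # threshold is an assumption; tune for your app

def pick_victims(processlist, max_seconds=MAX_QUERY_SECONDS):
    """Return thread ids of queries that have been running too long."""
    victims = []
    for row in processlist:
        thread_id, _user, _host, _db, command, seconds = row[:6]
        # Only kill active queries; skip Sleep/daemon/replication threads.
        if command == 'Query' and seconds > max_seconds:
            victims.append(thread_id)
    return victims

def kill_long_queries(conn):
    """Fetch the process list and KILL anything over the threshold."""
    cursor = conn.cursor()
    cursor.execute('SHOW PROCESSLIST')
    for thread_id in pick_victims(cursor.fetchall()):
        cursor.execute('KILL %d' % thread_id)
```

Wiring kill_long_queries into a cron-triggered request handler (with whatever connection function your setup provides) is left as the boring part.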
 
Other ideas?  I can't be the only one with 200+ clients that
occasionally trigger GC-SQL queries.  I have a hard time believing a
Google cloud product isn't designed to scale to my minuscule load.


200+ simultaneous clients sounds like thousands of QPS.  I wouldn't call that minuscule.  Cloud SQL doesn't do anything magical to make your database scale.  It is a MySQL server running on fault-tolerant storage.  No autosharding or anything like that.  Keep in mind that because we do synchronous replication, write latencies are higher than what you'd get with a local MySQL.  That might cause you to hold locks longer than you'd expect.

We can increase the limit, but every time we've seen somebody run into this situation, increasing the limit would just make things worse.  More simultaneous queries would mean more contention and more timeouts.
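One pattern that helps keep the simultaneous-connection count (and lock hold times) down is to open the connection as late as possible and guarantee it closes when the work finishes.  A minimal sketch, assuming only a DB-API-style connection with a close() method; the helper name with_connection is invented for illustration:

```python
import contextlib

def with_connection(connect, work):
    """Run work(conn) and always close the connection afterwards.

    `connect` is any zero-argument callable returning a DB-API-style
    connection.  Keeping the open/close window as short as possible
    means fewer of the instance's ~100 connection slots are tied up
    at any one moment.
    """
    with contextlib.closing(connect()) as conn:
        return work(conn)
```

The point is less the helper itself than the discipline: acquire per request, do the minimum under the open connection, and release in all code paths, including exceptions.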

What is your application?

Ken
 

On Fri, Jun 22, 2012 at 4:12 PM, Ken Ashcraft <kash@google.com> wrote:
> These might help:
> http://stackoverflow.com/questions/10424442/what-are-the-connection-limits-for-google-cloud-sql-from-app-engine-and-how-to
> http://stackoverflow.com/questions/10158805/whats-a-good-approach-to-managing-the-db-connection-in-a-google-cloud-sql-gae/10438622#10438622
>
>
> On Fri, Jun 22, 2012 at 2:48 AM, Jake Czyz <jjc.work@gmail.com> wrote:
>>
>> Hello,
>>
>> I've been working on a GAE / Cloud SQL app for many weeks and keep
>> hitting the wall on various GC-SQL limits that have forced me to
>> refactor my app several times.  A persistent one I'm running into is
>> the error:
>>
>> google.storage.speckle.python.api.rdbms.0: Instance has too many
>> concurrent requests: 101
>>
>> Does anyone have any advice for troubleshooting this error or
>> otherwise coding database connections in a way that
>> minimizes/mitigates this limitation?
>>
>> My app has a couple hundred clients, and I've already had to refactor
>> it substantially to use the datastore instead of GC-SQL for most
>> interaction with clients (data is then synced to-from GC-SQL via cron
>> jobs... which also are exceeding the 60 second Deadline occasionally,
>> but that's another issue...).  Right now, there's only a single type
>> of request that my clients still perform that needs to access the
>> GC-SQL DB directly (as opposed to datastore), due to consistency
>> requirements. This request happens, ballpark, once every 30 seconds or
>> so, on average, though sometimes there are a bunch in a row (i.e. one
>> every couple seconds for a minute or two).
>>
>> I can't even view processes (i.e. via "show processlist") and kill
>> threads via the Google APIs console SQL Prompt, as it displays "Failed
>> to load databases."   I know that restarting my instance (and waiting
>> several minutes) usually will get me back on track, but I've gotten
>> frustrated and am wondering what other strategies or ideas folks have
>> for dealing with the database connection limits.
>>
>> Thanks in advance,
>> Jake
>
>
