Saturday, June 23, 2012

Re: Troubleshooting "too many concurrent requests"

Hi Ken,

I'm the one who asked both of those questions. :-)

I'm still hitting the limits, and after a while my application becomes
unusable, with everything waiting on hung SQL processes that appear to
be using up the 100-connection limit. Only after I restart my instance
in the APIs console do things start working again. Is there a way to
programmatically restart the instance? Is there a way to reduce the
timeout on the connections so that it's the same as or less than the
GAE request deadline? That way, long-running SQL queries would fail
and GAE could catch them, instead of GAE failing on the deadline while
the SQL query sits there holding a connection.

Any other ideas? I can't be the only one with 200+ clients that
occasionally trigger GC-SQL queries. I have a hard time believing a
Google cloud product isn't designed to scale to my minuscule load.

Thanks!
Jake



On Fri, Jun 22, 2012 at 4:12 PM, Ken Ashcraft <kash@google.com> wrote:
> These might help:
> http://stackoverflow.com/questions/10424442/what-are-the-connection-limits-for-google-cloud-sql-from-app-engine-and-how-to
> http://stackoverflow.com/questions/10158805/whats-a-good-approach-to-managing-the-db-connection-in-a-google-cloud-sql-gae/10438622#10438622
>
>
> On Fri, Jun 22, 2012 at 2:48 AM, Jake Czyz <jjc.work@gmail.com> wrote:
>>
>> Hello,
>>
>> I've been working on a GAE / Cloud SQL app for many weeks and keep
>> hitting the wall on various GC-SQL limits that have forced me to
>> refactor my app several times.  A persistent one I'm running into is
>> the error:
>>
>> google.storage.speckle.python.api.rdbms.0: Instance has too many
>> concurrent requests: 101
>>
>> Does anyone have any advice for troubleshooting this error or
>> otherwise coding database connections in a way that
>> minimizes/mitigates this limitation?
>>
>> My app has a couple hundred clients, and I've already had to refactor
>> it substantially to use the datastore instead of GC-SQL for most
>> interaction with clients (data is then synced to and from GC-SQL via
>> cron jobs... which also occasionally exceed the 60-second deadline,
>> but that's another issue...).  Right now, there's only a single type
>> of request that my clients still perform that needs to access the
>> GC-SQL DB directly (as opposed to the datastore), due to consistency
>> requirements. This request happens roughly once every 30 seconds on
>> average, though sometimes there's a burst (one every couple of
>> seconds for a minute or two).
>>
>> I can't even view processes (e.g. via "show processlist") and kill
>> threads via the Google APIs console SQL Prompt, as it displays "Failed
>> to load databases."   I know that restarting my instance (and waiting
>> several minutes) usually gets me back on track, but I've gotten
>> frustrated and am wondering what other strategies or ideas folks have
>> for dealing with the database connection limits.
>>
>> Thanks in advance,
>> Jake
>
>
