Saturday, September 29, 2012

Re: Regarding compatibility of google app engine

Google App Engine currently supports Java, Python, and Go (experimental) as server-side languages.  Check out https://developers.google.com/appengine/.

Hope that helps,

Rob


On Sat, Sep 29, 2012 at 10:07 AM, anand d <anand339@gmail.com> wrote:
I am developing a solution using ODK Collect. My application has an Android app that posts a large form via ODK Collect. The server side is written in PHP. Can you suggest a way to deploy the app on Google App Engine? My application needs to save the posted data into the server database (MySQL) and then run various queries for data manipulation.
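As a sketch of what that would look like in one of the supported languages: a Python handler can take posted form fields, store them through the DB-API, and run follow-up queries. Here sqlite3 stands in for MySQL so the snippet is self-contained; the table and column names are invented for illustration, and on App Engine you would connect through the rdbms module instead.

```python
# Illustrative sketch only: store a posted form's fields and query them back.
# sqlite3 stands in for MySQL; the table and column names are invented.
import sqlite3

def save_submission(conn, form_data):
    """Insert one form submission (a dict of field name -> value)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS submissions (field TEXT, value TEXT)")
    conn.executemany(
        "INSERT INTO submissions (field, value) VALUES (?, ?)",
        sorted(form_data.items()))
    conn.commit()

def count_fields(conn):
    """Example of a follow-up query for data manipulation."""
    cur = conn.execute("SELECT COUNT(*) FROM submissions")
    return cur.fetchone()[0]
```

Against MySQL, only the connect call and the `%s` parameter style would change.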


Friday, September 28, 2012

Re: Digest for google-cloud-sql-discuss@googlegroups.com - 1 Message in 1 Topic

Hi Keith,

Sorry for the delay.  Something weird is going on here.  The OpenConnection latencies that we see are typically < 10ms.  We'll try to reproduce the issue and get back to you.

Ken

On Wed, Sep 26, 2012 at 1:46 PM, Keith Mukai <keith.mukai@essaytagger.com> wrote:
Okay, I have appstats data. On the simple poke servlet the delay is all in the OpenConnection call. There's only one series of Open-Exec-Close calls, as expected.

RPC
 @0ms rdbms.OpenConnection real=2267ms api=0ms
 @2267ms rdbms.Exec real=3ms api=0ms
 @2271ms rdbms.CloseConnection real=2ms api=0ms



And here's the fastest poke response in the series, just a few moments earlier:

RPC
 @0ms rdbms.OpenConnection real=25ms api=0ms
 @25ms rdbms.Exec real=4ms api=0ms
 @29ms rdbms.CloseConnection real=2ms api=0ms


I now have the DB poke running every two minutes and the response times are still all over the place (confirmed with appstats). There was no other load on the site during this period:

 (35) 2012-09-26 20:00:00.130 "GET /tasks/poke_db" 200 real=84ms cpu=0ms api=0ms overhead=0ms (3 RPCs)
 (36) 2012-09-26 19:58:00.572 "GET /tasks/poke_db" 200 real=1525ms cpu=0ms api=0ms overhead=0ms (3 RPCs)
 (37) 2012-09-26 19:52:00.329 "GET /tasks/poke_db" 200 real=51ms cpu=0ms api=0ms overhead=0ms (3 RPCs)
 (38) 2012-09-26 19:50:00.283 "GET /tasks/poke_db" 200 real=2016ms cpu=0ms api=0ms overhead=0ms (3 RPCs)
 (39) 2012-09-26 19:48:00.445 "GET /tasks/poke_db" 200 real=1818ms cpu=0ms api=0ms overhead=0ms (3 RPCs)
 (40) 2012-09-26 19:46:00.368 "GET /tasks/poke_db" 200 real=58ms cpu=0ms api=0ms overhead=0ms (3 RPCs)
 (41) 2012-09-26 19:40:00.356 "GET /tasks/poke_db" 200 real=1944ms cpu=0ms api=0ms overhead=0ms (3 RPCs)
 (42) 2012-09-26 19:38:00.284 "GET /tasks/poke_db" 200 real=23ms cpu=0ms api=0ms overhead=0ms (3 RPCs)
 (43) 2012-09-26 19:36:00.906 "GET /tasks/poke_db" 200 real=282ms cpu=0ms api=0ms overhead=0ms (3 RPCs)
 (44) 2012-09-26 19:34:00.161 "GET /tasks/poke_db" 200 real=4729ms cpu=0ms api=0ms overhead=0ms (3 RPCs)
 (45) 2012-09-26 19:32:00.534 "GET /tasks/poke_db" 200 real=2604ms cpu=0ms api=0ms overhead=0ms (3 RPCs)


I've attached a nightmare response time -- 54 seconds -- from one of my more complex pages. That same page can load just fine in 500-800ms. Sometimes the delays reach the 12s range; the 30-60s range is less common, but far from rare.

Appstats helped me track down the source of my orphaned open connections, so at least my code is keeping things tidy with the DB on its end now. But that doesn't seem to have made any difference to performance.
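For what it's worth, the "keeping things tidy" pattern Keith describes usually reduces to scoping every connection and cursor so they are closed even when a query raises. A minimal sketch with `contextlib.closing`, using sqlite3 in place of the Cloud SQL driver so it runs anywhere:

```python
# Sketch: close the cursor and connection deterministically, even if the
# query raises. sqlite3 stands in for the Cloud SQL driver here.
import sqlite3
from contextlib import closing

def tidy_select_one(connect=lambda: sqlite3.connect(":memory:")):
    """Run SELECT 1 with guaranteed cleanup and return the result."""
    with closing(connect()) as conn:
        with closing(conn.cursor()) as cur:
            cur.execute("SELECT 1")
            return cur.fetchone()[0]
```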


Any other ideas or tests to try? 

---------------------------
Keith Mukai, M.Ed.
High School English Teacher
EssayTagger.com Founder & CEO

Volunteer Coach, Niles West HS Boys, Girls Gymnastics Teams



On Mon, Sep 24, 2012 at 9:39 AM, <google-cloud-sql-discuss@googlegroups.com> wrote:
    Keith Mukai <keith.mukai@essaytagger.com> Sep 23 09:05AM -0700  

    Hey Joe,
     
    I'm still using my Spring-Hibernate-CloudSQL framework, but I'm seeing some
    crazily unpredictable Cloud SQL response times. I have a trivial DB "poke"
    servlet that just opens a session, runs "SELECT 1", and reports an elapsed
    time. I have a cron job that hits the poke every five minutes and the
    execution time is all over the place. Here's a series of response times
    with me hitting the poke servlet and refreshing the page over and over
    again (in millis):
     
    67
    117
    59
    37
    317
    2764
    2404
    2080
    35
    28
    1459
    29
    31
    23
    3959
    68
    132
     
    The variability of these results just confounds me. If the code was doing
    something horribly inefficient, the results should be more uniformly
    terrible. But it's obviously capable of running the poke well enough. It's
    the 1000-3000ms responses that are freaking me out. I had zero other load
    during these tests.
     
    On DB-heavy pages, *each* DB hit can scale up to similar 3000+ms times,
    causing the whole page to take an awful 12-15 seconds to load. And then the
    next request a second later might take 250ms.
     
    At first I thought it was just a warmup issue (thus the creation of the
    poke task) -- and it does seem to do better when it's fielding constant
    requests -- but then a long load will happen again at unpredictable times
    (and not related to an /_ah/warmup).
     
    I have some issues on my end -- Spring isn't closing my Hibernate
    connections and I can't figure out why. But as request load increases, I
    tend to see better, more stable performance. Obviously we'd expect the
    opposite if those stale connections were bogging down the system. I'm not
    using a connection pool layer with Hibernate.
     
    I don't know if it's specifically a Cloud SQL issue, but that seems the
    most likely culprit at the moment.
     
    What other info can I provide to help diagnose what's going on?

     

You received this message because you are subscribed to the Google Group google-cloud-sql-discuss.
You can post via email.
To unsubscribe from this group, send an empty message.
For more options, visit this group.




Tuesday, September 25, 2012


Re: Digest for google-cloud-sql-discuss@googlegroups.com - 1 Message in 1 Topic

Wow, that looks amazing! Thanks, Ken!

I'll work on getting appstats set up in the next couple of days and post results when I have them.

I won't be surprised if Hibernate/Spring ends up being the culprit. I've dumped Java in favor of my new love, Django, for my latest projects, but my EssayTagger Java codebase is too significant for a rewrite at this stage. But oh how I long for the day when I'm free of these Java chains... it's a 20-ton tank with too many moving parts when a little Django soapbox car would've sufficed!

---------------------------
Keith Mukai, M.Ed.
High School English Teacher
EssayTagger.com Founder & CEO

Volunteer Coach, Niles West HS Boys, Girls Gymnastics Teams



On Tue, Sep 25, 2012 at 10:14 AM, <google-cloud-sql-discuss@googlegroups.com> wrote:

Group: http://groups.google.com/group/google-cloud-sql-discuss/topics

    Ken Ashcraft <kash@google.com> Sep 24 11:55AM -0700  

    Hi Keith,
     
    An end-to-end test like this is valuable from an end-user's perspective,
    but it is difficult to tell where the time is going. Please use appstats
    to get a better picture:
    https://developers.google.com/appengine/docs/java/tools/appstats
     
    This will tell you if the queries to the database are fast or slow, or
    maybe hibernate is doing thousands of queries unnecessarily. If everything
    looks ok on the database side, I'd look harder at the warmup request. I
    believe a request can be a warmup even if it doesn't go to /_ah/warmup.
     
    Ken
     

     



Monday, September 24, 2012

Re: Hibernate - Spring - Cloud SQL

Hi Keith,

An end-to-end test like this is valuable from an end-user's perspective, but it is difficult to tell where the time is going.  Please use appstats to get a better picture: https://developers.google.com/appengine/docs/java/tools/appstats

This will tell you whether the queries to the database are fast or slow, or whether Hibernate is issuing thousands of queries unnecessarily.  If everything looks OK on the database side, I'd look harder at the warmup request.  I believe a request can be a warmup even if it doesn't go to /_ah/warmup.

Ken

On Sun, Sep 23, 2012 at 9:05 AM, Keith Mukai <keith.mukai@essaytagger.com> wrote:
Hey Joe,

I'm still using my Spring-Hibernate-CloudSQL framework, but I'm seeing some crazily unpredictable Cloud SQL response times. I have a trivial DB "poke" servlet that just opens a session, runs "SELECT 1", and reports an elapsed time. I have a cron job that hits the poke every five minutes and the execution time is all over the place. Here's a series of response times with me hitting the poke servlet and refreshing the page over and over again (in millis):

67
117
59
37
317
2764
2404
2080
35
28
1459
29
31
23
3959
68
132

The variability of these results just confounds me. If the code was doing something horribly inefficient, the results should be more uniformly terrible. But it's obviously capable of running the poke well enough. It's the 1000-3000ms responses that are freaking me out. I had zero other load during these tests.

On DB-heavy pages, *each* DB hit can scale up to similar 3000+ms times, causing the whole page to take an awful 12-15 seconds to load. And then the next request a second later might take 250ms. 

At first I thought it was just a warmup issue (thus the creation of the poke task) -- and it does seem to do better when it's fielding constant requests -- but then a long load will happen again at unpredictable times (and not related to an /_ah/warmup).

I have some issues on my end -- Spring isn't closing my Hibernate connections and I can't figure out why. But as request load increases, I tend to see better, more stable performance. Obviously we'd expect the opposite if those stale connections were bogging down the system. I'm not using a connection pool layer with Hibernate. 

I don't know if it's specifically a Cloud SQL issue, but that seems the most likely culprit at the moment. 

What other info can I provide to help diagnose what's going on? 
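The poke servlet described above boils down to timing one trivial round trip. A runnable sketch of just that measurement (sqlite3 stands in for Cloud SQL so the snippet is self-contained, and the servlet plumbing is omitted):

```python
# Sketch of the DB "poke": open a connection, run SELECT 1, report elapsed ms.
# sqlite3 stands in for Cloud SQL purely so the snippet is runnable anywhere.
import sqlite3
import time

def poke_db(connect=lambda: sqlite3.connect(":memory:")):
    """Return (result, elapsed_ms) for one open/SELECT 1/close round trip."""
    start = time.time()
    conn = connect()
    try:
        result = conn.execute("SELECT 1").fetchone()[0]
    finally:
        conn.close()
    return result, int((time.time() - start) * 1000)
```

Run on a schedule (as with the cron task above), the elapsed values are what appstats later attributes almost entirely to OpenConnection.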


Sunday, September 16, 2012

Re: Stopping the instance...

There is no explicit stop, but a per-use instance is turned off automatically after 15 minutes of inactivity.

-- Razvan ME


On Sun, Sep 16, 2012 at 6:54 PM, Jeffery Finley <jeff@jefferyfinley.com> wrote:
I'm using the pricing plan of "per use" while I develop the app.

I don't see how to stop the instance so that I'm not charged when I don't need it up and running.  How can I stop the instance?

Thanks


Friday, September 14, 2012

Re: Hibernate - Spring - Cloud SQL

Thanks.... 

On Thursday, September 13, 2012 at 20:18:37 UTC+2, Joe Faith wrote:
We've had a few questions around using App Engine with Hibernate, Spring and Cloud SQL


and a guestbook sample project is published here:


J

Re: squirrel-sql-3.4.0-MacOSX-install.jar will not install on MacBook with OS X 10.8.1 (12B19)


Joe,

Thanks for your contact. Version is as shown below:-

Thomas-Duffys-MacBook-Air:~ tom$ java -version
java version "1.6.0_35"
Java(TM) SE Runtime Environment (build 1.6.0_35-b10-428-11M3811)
Java HotSpot(TM) 64-Bit Server VM (build 20.10-b01-428, mixed mode)
Thomas-Duffys-MacBook-Air:~ tom$ 

 
Regards,

Thomas G Duffy

On 13 September 2012 22:34, Joe Faith <joefaith@google.com> wrote:
Hi Thomas

What version of Java is being used by default?
(to find this out run 'java -version')

j

On Thu, Sep 13, 2012 at 11:52 AM, Thomas G Duffy <tom@dce.ie> wrote:

Rob,

Thanks for your response.

I've tried this and get this response:-
"Invalid or corrupt jarfile /Users/tom/Downloads/squirrel-sql-3.4.0-MacOSX-install.jar"

 
Regards,

Thomas G Duffy


On 13 September 2012 19:54, Rob Clevenger <rcleveng@google.com> wrote:
Try java -jar [path to jar file]

Does this work?

On Tue, Sep 11, 2012 at 8:55 AM, Thomas G Duffy <tom@dce.ie> wrote:

I would very much like to use Google Code SQL on my MacBook. However, I cannot get past the 1st hurdle. When I try to install squirrel-sql-3.4.0-MacOSX-install.jar,  I get the error message "The Java JAR file squirrel-sql-3.4.0-MacOSX-install.jar could not be launched. Check the Console for possible error messages". This seems to have been a common problem over the last few years. I have read many forum discussions on the article but have not found a solution that works. I have both 32 bit and 64 bit Java SE6 by Apple Inc. installed. Any suggestions would be appreciated.






--
Joe Faith | Product Manager | Google Cloud

Thursday, September 13, 2012

Re: squirrel-sql-3.4.0-MacOSX-install.jar will not install on MacBook with OS X 10.8.1 (12B19)

Hi Thomas

What version of Java is being used by default?
(to find this out run 'java -version')

j

On Thu, Sep 13, 2012 at 11:52 AM, Thomas G Duffy <tom@dce.ie> wrote:

Rob,

Thanks for your response.

I've tried this and get this response:-
"Invalid or corrupt jarfile /Users/tom/Downloads/squirrel-sql-3.4.0-MacOSX-install.jar"

 
Regards,

Thomas G Duffy

On 13 September 2012 19:54, Rob Clevenger <rcleveng@google.com> wrote:
Try java -jar [path to jar file]

Does this work?

On Tue, Sep 11, 2012 at 8:55 AM, Thomas G Duffy <tom@dce.ie> wrote:

I would very much like to use Google Code SQL on my MacBook. However, I cannot get past the 1st hurdle. When I try to install squirrel-sql-3.4.0-MacOSX-install.jar,  I get the error message "The Java JAR file squirrel-sql-3.4.0-MacOSX-install.jar could not be launched. Check the Console for possible error messages". This seems to have been a common problem over the last few years. I have read many forum discussions on the article but have not found a solution that works. I have both 32 bit and 64 bit Java SE6 by Apple Inc. installed. Any suggestions would be appreciated.






--
Joe Faith | Product Manager | Google Cloud

Which cloud version is the best?

Hi, I am looking to use a second hard drive for storing my data. Which option is best, and what is recommended?


Re: squirrel-sql-3.4.0-MacOSX-install.jar will not install on MacBook with OS X 10.8.1 (12B19)

Try java -jar [path to jar file]

Does this work?

On Tue, Sep 11, 2012 at 8:55 AM, Thomas G Duffy <tom@dce.ie> wrote:

I would very much like to use Google Code SQL on my MacBook. However, I cannot get past the 1st hurdle. When I try to install squirrel-sql-3.4.0-MacOSX-install.jar,  I get the error message "The Java JAR file squirrel-sql-3.4.0-MacOSX-install.jar could not be launched. Check the Console for possible error messages". This seems to have been a common problem over the last few years. I have read many forum discussions on the article but have not found a solution that works. I have both 32 bit and 64 bit Java SE6 by Apple Inc. installed. Any suggestions would be appreciated.


Tuesday, September 11, 2012

squirrel-sql-3.4.0-MacOSX-install.jar will not install on MacBook with OS X 10.8.1 (12B19)

I would very much like to use Google Code SQL on my MacBook. However, I cannot get past the 1st hurdle. When I try to install squirrel-sql-3.4.0-MacOSX-install.jar,  I get the error message "The Java JAR file squirrel-sql-3.4.0-MacOSX-install.jar could not be launched. Check the Console for possible error messages". This seems to have been a common problem over the last few years. I have read many forum discussions on the article but have not found a solution that works. I have both 32 bit and 64 bit Java SE6 by Apple Inc. installed. Any suggestions would be appreciated.

Monday, September 10, 2012

Re: Error when I call a stored procedure on Cloud SQL in python2.7

Hi Jason,

Sorry about the silence.  We're working on a fix.

Ken

On Mon, Aug 27, 2012 at 1:50 PM, Jason M. Yi <93time@gmail.com> wrote:

Hi,
We're trying to call a stored procedure on Cloud SQL in python2.7.
Here's our simple code:

# stored procedure
CREATE PROCEDURE Test_Procedure (_nmbr INT)
BEGIN
INSERT INTO TEST(AA) VALUES(_nmbr);
SELECT * FROM TEST;
END;

# code in python
conn = rdbms.connect(instance=_INSTANCE_NAME, database=_DATABASE_NAME)
cursor = conn.cursor()

cursor.execute("call Test_Procedure(%s)", (50,))  # parameters passed as a tuple
results = cursor.fetchall()
for row in results:
    self.response.out.write(str(row[0]))

conn.close()

When I executed the procedure on SQL Prompt, it worked very well, but when I tried to execute it by python code, I've gotten this error message.
  InternalError: fetchall() called before execute

Do I have to do something else to execute a stored procedure from Python code?



Saturday, September 8, 2012

Re: Cannot create instance - says billing is not enabled

I have fixed it for you. Let me know if you are still having difficulties creating instances.

-Amit

On Sat, Sep 8, 2012 at 9:12 AM, Carl Franklin <carl.franklin@mavenwave.com> wrote:
I had billing on. Turned it off and then back on. No success.

ProjectId: 679371357409


Wednesday, September 5, 2012

Re: DB-API Field Type Codes in mysql-python

Thanks for the warning, Ken.  I was actually thinking about doing exactly what you suggest: putting a dummy table in the database and testing it for its known type to figure out which codes are being used.  But now it seems essential, since there will be no way to know whether JDBC or MySQLdb type codes are in use, and there is always a possibility of a rollback.

So I'll do just that.  Thanks again for your responses.

On Wednesday, September 5, 2012 1:44:31 PM UTC-4, Ken Ashcraft wrote:
On Wed, Sep 5, 2012 at 4:06 AM, ahroth <aviv...@gmail.com> wrote:
Thanks, Ken.  But since I've coded my application to handle the JDBC codes, it will immediately break once you switch over to the proper MySQLdb codes.

Will I (and other developers with the same issue) get ample warning to make the switch?

You appear to be the only developer that has noticed...

These kinds of backwards-incompatible bugs are tricky because we are dependent on App Engine's rollout schedule.  Sometimes App Engine will roll out a new release and then roll it back because of a bug, so your app could see the old behavior, then the new behavior, then the old behavior again.
 
 Even better, would it be possible to query the system to find out which code set is currently being used?


I don't see a way to run the system in a "both ways work" mode.  I think you should query a known column and check the type.  For example, create a dummy table with a single varchar column and a single row. When your app instance starts up, query that table and check the type of the column.  Since the typecode for varchar is different, you can tell which mode the system is running in.  Store the mode in a global variable that the app instance can use for its lifetime.

Sorry about the trouble,
Ken
 
Thanks again for your help!



Get access to two instances at the same time from the same machine

Dear all,
I would like to get access from my Mac to two different Google Cloud SQL instances using SquirrelSQL.
One instance is tied to one account and the other one is tied to a second account (domains are different: one is free and one is business).
I would like to know if it is possible to add into the file ~/Library/Preferences/com.google.cloud.plist two different OAuth keys in order to be able to get access to different instances at the same time. 
I am able, of course, to get access to the instances using the following steps:
  • rm ~/Library/Preferences/com.google.cloud.plist
  • ./google_sql.sh instance
  • register OAuth key
but it is a pain doing that every time I want to make a switch.
Do you have any suggestion?

Thank you very much for your kind support and best regards,
Marco.
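The manual credential-swapping steps Marco describes could be automated with a small helper. This is an untested sketch, and the per-profile file naming is my own convention, not a documented feature: the idea is to register each account once, save a copy of the resulting plist under a profile name, and copy the right one back into place before launching google_sql.sh.

```python
# Hypothetical helper: after registering each account once, save the
# resulting com.google.cloud.plist as com.google.cloud.plist.<profile>.
# This function swaps the chosen profile's saved copy into place.
import os
import shutil

def switch_profile(profile, prefs_dir=None):
    if prefs_dir is None:
        prefs_dir = os.path.expanduser('~/Library/Preferences')
    active = os.path.join(prefs_dir, 'com.google.cloud.plist')
    saved = active + '.' + profile
    if not os.path.isfile(saved):
        raise IOError('no saved credentials for profile %r' % profile)
    shutil.copyfile(saved, active)
    return active
```

For example, `switch_profile('business')` before running `./google_sql.sh instance` for the business-domain instance.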


Tuesday, September 4, 2012

Re: DB-API Field Type Codes in mysql-python

I think this is just an oversight.  We're returning the JDBC integers instead of the mysqldb integers.  I'll file a bug.

Ken


On Sat, Sep 1, 2012 at 2:31 AM, ahroth <avivroth@gmail.com> wrote:
I am using Google App Engine with the Google Cloud SQL module "rdbms".  According to the documentation, rdbms uses DB-API 2.0.  What I've been trying to do is take advantage of the cursor.description attribute, which is a sequence of 7-item sequences (name, type_code, display_size, internal_size, precision, scale, null_ok).  "type_code" is what I'm trying to get at, because I need to pass the data to Google Visualizations, which require types to be declared for each column.

Here's the problem:

"type_code" is returned as an integer.  Different integers represent different types.  I searched madly on the internet for mappings for these codes, and found them for MySQLdb.  The problem is that the ones returned from the Google Cloud SQL rdbms module (for Google App Engine) are different codes!  An integer gets a different code on my local machine than an integer returned from Google Cloud SQL in GAE.

So my questions:
1. Does anyone know where I can get a mapping for the type_codes returned from the rdbms module on GAE?  Or am I going to have to manually test each type in the database?
2. Are there any constants for FIELD_TYPE that I can use with rdbms?  MySQLdb defines these constants in human-readable form, so it would be great if rdbms had the same constant names, so that I wouldn't have to worry about coding for two type mappings.

Can anyone help?  Thanks!
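Once you know which convention is in play, mapping type_codes to Google Visualization column types is a small lookup. A hypothetical sketch follows; the codes shown are the standard `java.sql.Types` and `MySQLdb.constants.FIELD_TYPE` values for a few common types, and the fallback to 'string' is my own choice.

```python
# Hypothetical mapping from cursor.description type_codes to Google
# Visualization column types. The two tables must stay separate: code 12
# means VARCHAR in java.sql.Types but DATETIME in MySQLdb's FIELD_TYPE,
# which is exactly why the two conventions cannot be merged.
JDBC_TO_GVIZ = {
    4: 'number',     # java.sql.Types.INTEGER
    8: 'number',     # java.sql.Types.DOUBLE
    12: 'string',    # java.sql.Types.VARCHAR
    93: 'datetime',  # java.sql.Types.TIMESTAMP
}
MYSQLDB_TO_GVIZ = {
    3: 'number',     # FIELD_TYPE.LONG (INT)
    5: 'number',     # FIELD_TYPE.DOUBLE
    12: 'datetime',  # FIELD_TYPE.DATETIME
    253: 'string',   # FIELD_TYPE.VAR_STRING (VARCHAR)
}

def gviz_type(type_code, mode):
    """mode is 'jdbc' or 'mysqldb'; unknown codes fall back to 'string'."""
    table = JDBC_TO_GVIZ if mode == 'jdbc' else MYSQLDB_TO_GVIZ
    return table.get(type_code, 'string')
```

In an app you would pick the mode once (for example by probing a known column, as suggested elsewhere in this thread) and then call `gviz_type(col[1], mode)` for each entry in cursor.description.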

Re: Question about Binary log in google cloud sql

Hi Rast,

Cloud SQL currently does not support options like "--log-bin" to enable binary logging. However, your data is already replicated using file-system replication across multiple data centers, to ensure high availability of your database during planned and unplanned data center failures.

-Amit

On Mon, Sep 3, 2012 at 3:13 AM, Rast Rastapana <tung_suplex@hotmail.com> wrote:
Can I use a mysqld option like "--log-bin" to enable binary logging for data replication or data recovery in Google Cloud SQL?
If yes, can I look at the data in the log file?
If not, does that mean Google Cloud SQL already implements data replication and recovery for my data?

Monday, September 3, 2012

Question about Binary log in google cloud sql

Can I use a mysqld option like "--log-bin" to enable binary logging for data replication or data recovery in Google Cloud SQL?
If yes, can I look at the data in the log file?
If not, does that mean Google Cloud SQL already implements data replication and recovery for my data?

Sunday, September 2, 2012

Re: Would you please tell me more about performance tool ?

There is no access to the raw MySQL protocol, so you cannot use the mysqladmin tool. Note that the InnoDB tables from INFORMATION_SCHEMA [1] are available and should allow you to find which query is holding locks.
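For example, the blocking transaction can usually be identified by joining the InnoDB transaction and lock-wait tables. This is a hypothetical sketch using the MySQL 5.5+ INFORMATION_SCHEMA table and column names; it would be run through the rdbms module's cursor rather than mysqladmin.

```python
# Hypothetical example: find which transaction is blocking which, using
# the INNODB_TRX and INNODB_LOCK_WAITS tables from INFORMATION_SCHEMA.
BLOCKING_QUERY_SQL = """
SELECT r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query           AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
"""
# In an App Engine handler this would be executed via the rdbms module:
#   cursor.execute(BLOCKING_QUERY_SQL)
#   for row in cursor.fetchall():
#       ...  # waiting/blocking thread ids and their current queries
```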


-- Razvan ME


On Sat, Sep 1, 2012 at 9:17 PM, Rast Rastapana <tung_suplex@hotmail.com> wrote:
Do you support "mysqladmin debug" (the debug command in MySQL)?
Right now I use some commands to check row locking (SHOW PROCESSLIST and SHOW ENGINE INNODB STATUS),
but they can't show which thread holds the table locks that are blocking my query.

Thank you in advance!


Saturday, September 1, 2012

Would you please tell me more about performance tool ?

Do you support "mysqladmin debug" (the debug command in MySQL)?
Right now I use some commands to check row locking (SHOW PROCESSLIST and SHOW ENGINE INNODB STATUS),
but they can't show which thread holds the table locks that are blocking my query.

Thank you in advance!

DB-API Field Type Codes in mysql-python

I am using Google App Engine with the Google Cloud SQL module "rdbms".  According to the documentation, rdbms uses DB-API 2.0.  What I've been trying to do is take advantage of the cursor.description attribute, which is a sequence of 7-item sequences (name, type_code, display_size, internal_size, precision, scale, null_ok).  "type_code" is what I'm trying to get at, because I need to pass the data to Google Visualizations, which require types to be declared for each column.

Here's the problem:

"type_code" is returned as an integer.  Different integers represent different types.  I searched madly on the internet for mappings for these codes, and found them for MySQLdb.  The problem is that the ones returned from the Google Cloud SQL rdbms module (for Google App Engine) are different codes!  An integer gets a different code on my local machine than an integer returned from Google Cloud SQL in GAE.

So my questions:
1. Does anyone know where I can get a mapping for the type_codes returned from the rdbms module on GAE?  Or am I going to have to manually test each type in the database?
2. Are there any constants for FIELD_TYPE that I can use with rdbms?  MySQLdb defines these constants in human-readable form, so it would be great if rdbms had the same constant names, so that I wouldn't have to worry about coding for two type mappings.

Can anyone help?  Thanks!