Friday, August 31, 2018

[google-cloud-sql-discuss] Re: Backups - what if we need to retrieve an older version of a specific schema?

Cloud SQL backups allow you to recover lost data or recover from a problem with your instance. You can schedule automatic backups, which are created at specific intervals, or create on-demand backups at any time. If an automatic or on-demand backup was completed prior to your customer's record deletion, you can recover the data using these methods. 
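For illustration, the on-demand backup and restore flow can be sketched with the gcloud CLI. The instance name and backup ID below are placeholders, and flags should be checked against the current gcloud reference:

```shell
# Create an on-demand backup before a risky change
gcloud sql backups create --instance=my-instance \
    --description="pre-cleanup backup"

# List available backups (automatic and on-demand) to find a backup ID
gcloud sql backups list --instance=my-instance

# Restore a chosen backup onto an instance. This overwrites all databases
# on the target instance, so restoring to a temporary instance and
# exporting just the affected schema is often safer.
gcloud sql backups restore BACKUP_ID --restore-instance=my-instance
```

Because a restore replaces the whole instance, recovering a single customer's schema typically means restoring to a scratch instance and then dumping only that schema (for example with mysqldump) back into production.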

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/cce5a3a4-d91b-4678-abb0-857a21cf5bad%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

[google-cloud-sql-discuss] Cloud SQL reliability

On a normal Compute Engine instance, I have to use persistent storage to keep my data safe and replicated, as the local SSD is ephemeral.
What about Cloud SQL? Is the data stored in the database replicated and kept safe automatically?

What happens in case of instance crashes, node failures, and so on?
Some of my apps don't require a MySQL replica, so I don't need HA or read replicas (exactly like using a single VPS with MySQL on it).

But is Google able to restore the database automatically in case of a crash?

In other words, are Cloud SQL instances stored on a RAID or something similar?

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/bf3a1221-1308-4b56-ba8c-1c217754098e%40googlegroups.com.

Wednesday, August 29, 2018

Re: [google-cloud-sql-discuss] Re: Replicate to CloudSQL via pglogical

Thanks for the update.

On Wed, Aug 29, 2018 at 5:47 PM 'Katayoon (Cloud Platform Support)' via Google Cloud SQL discuss <google-cloud-sql-discuss@googlegroups.com> wrote:
Cloud SQL for PostgreSQL does not support replication from an external master or external replicas for Cloud SQL instances at the moment.

I should note that the Cloud SQL product team is aware of this feature request; however, we cannot provide an ETA for the implementation. You may star the feature request to receive updates.

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/89a6b7da-5f16-46de-b45f-ff31f6cc787d%40googlegroups.com.

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/CABVnwxBgF1B5rJWWFskER0fu0iq0sTaectwbx%2BrBPZUVpQekwQ%40mail.gmail.com.

[google-cloud-sql-discuss] Re: Replicate to CloudSQL via pglogical

Cloud SQL for PostgreSQL does not support replication from an external master or external replicas for Cloud SQL instances at the moment.

I should note that the Cloud SQL product team is aware of this feature request; however, we cannot provide an ETA for the implementation. You may star the feature request to receive updates.

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/89a6b7da-5f16-46de-b45f-ff31f6cc787d%40googlegroups.com.

[google-cloud-sql-discuss] Re: CloudSQL Proxy error: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: dial tcp: i/o timeout

Yes - it actually did.  Thanks.

On Friday, August 24, 2018 at 10:44:35 AM UTC-5, Pia Chamberlain wrote:
Thanks for the heads up, Sean. I'll make sure that feedback gets back to the Cloud SQL doc team.

Would a pointer here have helped?
https://cloud.google.com/kubernetes-engine/docs/how-to/access-scopes#service_account

On Thursday, August 23, 2018 at 9:48:14 AM UTC-7, Sean Dowd wrote:
I ended up creating a new cluster that has sql in its scope list (the other one did not).  Connections worked immediately (with the existing token).  If this is the answer, the documentation should probably point to it somewhere. I would have hoped that the Cloud SQL Proxy would give a more descriptive error message.

I followed the list here (your link [5]):
  • Enable the Cloud SQL API
  • Install the proxy
  • Create a service account
  • Start the proxy
  • Start the mysql session
but still could not connect.  Re-creating a cluster is really not a good solution (just re-creating the node pool did not work).  It also seems that the inability to update scopes is a shortcoming in GCP/GKE.  Also note that in link [6], following the Kubernetes track, the documentation does not mention creating the cluster with the sql scope.
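A hypothetical way to check whether an existing cluster's nodes carry the Cloud SQL scope, using gcloud (cluster name and zone below are placeholders):

```shell
# Print the OAuth scopes granted to the cluster's nodes; the Cloud SQL
# scope appears as https://www.googleapis.com/auth/sqlservice.admin
gcloud container clusters describe my-cluster \
    --zone=us-central1-b \
    --format="value(nodeConfig.oauthScopes)"
```

If the sql scope is missing from the output, that matches the behavior described above, since node scopes cannot be updated in place on an existing node pool.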



On Thursday, August 23, 2018 at 11:24:56 AM UTC-5, Sam (Google Cloud Support) wrote:

Have you tried refreshing the access token? Some OAuth 2.0 flows require using refresh tokens to acquire new access tokens, as access tokens have limited lifetimes to enhance security [1]. A refresh token will allow your application to access Cloud SQL beyond the access token's lifetime [2].


Based on the error message, you can have a look at this documentation about troubleshooting Cloud SQL connection issues [3][4]. To ensure proper configuration, I would then follow the guides in the fifth and sixth links [5][6]. The last link is an answer on Stack Overflow that I found [7]. Hope this helps.


[1] https://cloud.google.com/storage/docs/json_api/v1/how-tos/authorizing#OAuth2Authorizing

[2] https://cloud.google.com/storage/docs/json_api/v1/how-tos/authorizing#OAuth2Authorizing

[3] https://cloud.google.com/sql/docs/mysql/diagnose-issues

[4] https://cloud.google.com/sql/faq#connections

[5] https://cloud.google.com/sql/docs/mysql/connect-admin-proxy

[6] https://cloud.google.com/sql/docs/mysql/sql-proxy

[7] https://stackoverflow.com/questions/5755819/lost-connection-to-mysql-server-at-reading-initial-communication-packet-syste

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/b440da96-f5cb-4cee-8d51-0fe65b3326d1%40googlegroups.com.

[google-cloud-sql-discuss] Re: GCP SQL is not erasing bin logs ??


Thanks a lot .. super helpful



On Monday, August 27, 2018 at 13:11:54 (UTC-5), Sam (Google Cloud Support) wrote:
Additionally, there is another thread that provides insight into how manual backup deletion impacts binary log deletion [1].
Hope all this helps.

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/964d933b-b72f-42c6-a574-96269b92096f%40googlegroups.com.

[google-cloud-sql-discuss] Replicate to CloudSQL via pglogical

A customer has PostgreSQL on GKE and wants to replicate to Cloud SQL via pglogical.
Do you know if Cloud SQL will work with pglogical?

Thanks in advance for any comments.

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/345a0fea-9ff1-4616-8553-3388e9d89687%40googlegroups.com.

Tuesday, August 28, 2018

[google-cloud-sql-discuss] Re: Unable to make change SQL Parameter Value

Hello Prashant,


Currently the max_connections parameter isn't in the list of settable flags, as can be seen in these documents: MySQL Supported flags and PostgreSQL Supported flags. In addition, you should review the closed feature request Expose max_connections as configurable PostgreSQL flag, where the Cloud SQL Engineering Team stated: "The number of concurrent connections can affect the stability of a Cloud SQL instance. Specifically, each connection requires memory allocation. Allowing too many connections can contribute to OOM events, which trigger reboots (unexpected downtime).  As a managed service, Cloud SQL takes any parameter that can affect instance stability seriously. Cloud SQL will potentially increase these limits in the future, but does not plan to make max_connections directly configurable." I also recommend reviewing the following document, where you will find the current fixed connection limits for Cloud SQL for MySQL and PostgreSQL: Max connections fixed limits.
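Although the flag can't be changed, the effective limit can at least be inspected from any client session. A sketch using the mysql client (host and user below are placeholders for your own connection details):

```shell
# Read the effective connection limit and current usage; neither query
# requires the SUPER privilege
mysql --host=127.0.0.1 --user=root --password \
    -e "SHOW VARIABLES LIKE 'max_connections'; SHOW STATUS LIKE 'Threads_connected';"
```

Comparing Threads_connected against max_connections over time can show whether the fixed limit is actually being approached.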


On Tuesday, August 28, 2018 at 11:02:50 AM UTC-4, Prashant Dave wrote:
Hi,

I am trying to change the global value of the max_connections parameter. I am making the change using the root ID. However, while changing the parameter value I am getting the following error (see the attached screenshot of the error message for reference). Kindly let me know how to change the value of this parameter.

Access denied; you need (at least one of) the SUPER privilege(s) for this operation


Regards
Prashant Dave




--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/83d7ea76-b59a-40aa-8944-a76bde186a59%40googlegroups.com.

[google-cloud-sql-discuss] Unable to make change SQL Parameter Value

Hi,

I am trying to change the global value of the max_connections parameter. I am making the change using the root ID. However, while changing the parameter value I am getting the following error (see the attached screenshot of the error message for reference). Kindly let me know how to change the value of this parameter.

Access denied; you need (at least one of) the SUPER privilege(s) for this operation


Regards
Prashant Dave




--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/1b011f2c-89c7-4e40-8364-cb3b45241131%40googlegroups.com.

Monday, August 27, 2018

[google-cloud-sql-discuss] Re: GCP SQL is not erasing bin logs ??

Additionally, there is another thread that provides insight into how manual backup deletion impacts binary log deletion [1].
Hope all this helps.

[1] https://groups.google.com/forum/#!topic/google-cloud-sql-discuss/KKwSpmQ85-k

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/c3c1e53e-fb89-4d5d-a216-1e4095e53156%40googlegroups.com.

[google-cloud-sql-discuss] Re: GCP SQL is not erasing bin logs ??

Binary logging is used for some operations such as clone and replica creation [1][2], and it is useful for recovering an instance to a specific point in time [3]. For First Generation instances, the space used by binary logs counts against the total storage used by the instance. For Second Generation instances, binary logs are charged at the regular storage rate. When you disable binary logging, all existing binary logs are deleted [4].


Binary logs use storage space and are automatically deleted along with their associated automatic backup [5]. You should check whether you have set up *automatic backups* or *on-demand backups* by navigating to your Cloud SQL instance from the GCP Cloud Console to view information about it [6]. From there you can *edit* the instance settings [7][8].


Do you have a First Generation or Second Generation instance? The most recent 7 automated backups, and *all on-demand backups, are retained* for Second Generation instances [9]. So if you have on-demand backups configured instead of automated backups, this could be the reason why you are seeing binary logs beyond 7 days. If the size of the binary logs is an issue for your instance, you can delete them by disabling and then re-enabling binary logging [4].


Note that you cannot manually delete binary logs, nor change the 7-day retention period on automated backups. So try disabling binary logging if you want to delete the existing binary logs.
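As a sketch, the disable/re-enable cycle that purges existing binary logs could look like this with the gcloud CLI (the instance name is a placeholder; note that patching restarts the instance and point-in-time recovery is lost for the gap):

```shell
# Disabling binary logging deletes all existing binary logs
gcloud sql instances patch my-instance --no-enable-bin-log

# Re-enable it afterwards so point-in-time recovery resumes from here
gcloud sql instances patch my-instance --enable-bin-log
```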


To change 'expire_logs_days' you need to have configured an external replica on a Cloud SQL Second Generation instance [10], and to purge those binary logs you need SUPER privileges. There is an internal feature request being evaluated to allow manual purging of binary logs. If you'd like to show your support for this, you can do so in our Public Issue Tracker tool [11]. This carries the benefit of allowing you to interact with the engineering team directly regarding this functionality, and it also allows other users who would like to see it implemented to show their support by 'starring' the issue.


[1] https://cloud.google.com/sql/docs/mysql/backup-recovery/backups#what_backups_provide

[2] https://cloud.google.com/sql/docs/mysql/replication/tips#bin-log-impact

[3] https://cloud.google.com/sql/docs/mysql/backup-recovery/restoring#pitr

[4] https://cloud.google.com/sql/docs/mysql/backup-recovery/restore#about_enabling_binary_logging

[5] https://cloud.google.com/sql/docs/mysql/instance-info#available_metrics

[6] https://cloud.google.com/sql/docs/mysql/instance-info#info

[7] https://cloud.google.com/sql/docs/mysql/edit-instance

[8] https://cloud.google.com/sql/docs/mysql/instance-settings

[9] https://cloud.google.com/sql/faq#backup-and-recovery

[10] https://cloud.google.com/sql/docs/mysql/replication/configure-external-replica#configuring_the_external_replica

[11] https://issuetracker.google.com/35904375


--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/df80cadc-ea9c-4373-993e-d7c3f4fa5a1f%40googlegroups.com.

Saturday, August 25, 2018

[google-cloud-sql-discuss] Re: Output DataPrep job to PostGres

Errrr, so if I kicked off a Dataflow job and it's been running for over 13 hours... should I just kill it?  I did the same job with a smaller data set and it completed in 20 minutes; now, with a larger data set, it's taken about 14 hours and it's still not there.

On Wednesday, August 22, 2018 at 3:55:34 PM UTC-4, Jordan (Cloud Platform Support) wrote:
You can find examples on Stack Exchange of writing to a Postgres database via the Java JdbcIO transform. If you are not using Java, then it is recommended to post an official feature request with the Apache Beam team in their Issue Tracker. 

As a workaround, you can always use the TextIO transform to write the data to Google Cloud Storage in something like a .csv file. Then set up a trigger that runs a simple function in Google Cloud Functions to read the file and write it to your Postgres DB. 

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/564ab5c4-08ea-4c18-9163-d16696c83800%40googlegroups.com.

Friday, August 24, 2018

[google-cloud-sql-discuss] GCP SQL is not erasing bin logs ??


Hi

It is known that binary logs are erased after a 7-day retention period and you cannot change this behaviour.

But for the past month, my storage has been growing steadily...

mysql> SHOW BINARY LOGS;

It shows that I have 4376 files; that is far beyond what I believe it should be.

I tried to erase them, and to change expire_logs_days (which I found set to 0), but the platform does not allow me to do that.

And my storage is still growing.

custos.png


And my database is not in maintenance mode, and general logs are off.

Thanks a lot

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/48258f74-0e6c-47a9-b805-82048e9b42ad%40googlegroups.com.

[google-cloud-sql-discuss] Backups - what if we need to retrieve an older version of a specific schema?

An important question was just brought to my attention.

Given a software-as-a-service model, with each customer having their own schema on the MySQL server:

Suppose a customer deleted some records by mistake, and needs us to recover those records from a version of their schema from, say, two months earlier. What do we need to have in place in order to make that happen, and what facilities does Google have for that purpose?

--
JHHL

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/71555007-9c17-42b4-83c1-f9623c7db4a6%40googlegroups.com.

[google-cloud-sql-discuss] Re: CloudSQL Proxy error: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: dial tcp: i/o timeout

Thanks for the heads up, Sean. I'll make sure that feedback gets back to the Cloud SQL doc team.

Would a pointer here have helped?
https://cloud.google.com/kubernetes-engine/docs/how-to/access-scopes#service_account

On Thursday, August 23, 2018 at 9:48:14 AM UTC-7, Sean Dowd wrote:
I ended up creating a new cluster that has sql in its scope list (the other one did not).  Connections worked immediately (with the existing token).  If this is the answer, the documentation should probably point to it somewhere. I would have hoped that the Cloud SQL Proxy would give a more descriptive error message.

I followed the list here (your link [5]):
  • Enable the Cloud SQL API
  • Install the proxy
  • Create a service account
  • Start the proxy
  • Start the mysql session
but still could not connect.  Re-creating a cluster is really not a good solution (just re-creating the node pool did not work).  It also seems that the inability to update scopes is a shortcoming in GCP/GKE.  Also note that in link [6], following the Kubernetes track, the documentation does not mention creating the cluster with the sql scope.



On Thursday, August 23, 2018 at 11:24:56 AM UTC-5, Sam (Google Cloud Support) wrote:

Have you tried refreshing the access token? Some OAuth 2.0 flows require using refresh tokens to acquire new access tokens, as access tokens have limited lifetimes to enhance security [1]. A refresh token will allow your application to access Cloud SQL beyond the access token's lifetime [2].


Based on the error message, you can have a look at this documentation about troubleshooting Cloud SQL connection issues [3][4]. To ensure proper configuration, I would then follow the guides in the fifth and sixth links [5][6]. The last link is an answer on Stack Overflow that I found [7]. Hope this helps.


[1] https://cloud.google.com/storage/docs/json_api/v1/how-tos/authorizing#OAuth2Authorizing

[2] https://cloud.google.com/storage/docs/json_api/v1/how-tos/authorizing#OAuth2Authorizing

[3] https://cloud.google.com/sql/docs/mysql/diagnose-issues

[4] https://cloud.google.com/sql/faq#connections

[5] https://cloud.google.com/sql/docs/mysql/connect-admin-proxy

[6] https://cloud.google.com/sql/docs/mysql/sql-proxy

[7] https://stackoverflow.com/questions/5755819/lost-connection-to-mysql-server-at-reading-initial-communication-packet-syste

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/ac243802-0fad-4c09-9b65-f47385a6c3e6%40googlegroups.com.

Thursday, August 23, 2018

[google-cloud-sql-discuss] Re: CloudSQL Proxy error: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: dial tcp: i/o timeout

I ended up creating a new cluster that has sql in its scope list (the other one did not).  Connections worked immediately (with the existing token).  If this is the answer, the documentation should probably point to it somewhere. I would have hoped that the Cloud SQL Proxy would give a more descriptive error message.

I followed the list here (your link [5]):
  • Enable the Cloud SQL API
  • Install the proxy
  • Create a service account
  • Start the proxy
  • Start the mysql session
but still could not connect.  Re-creating a cluster is really not a good solution (just re-creating the node pool did not work).  It also seems that the inability to update scopes is a shortcoming in GCP/GKE.  Also note that in link [6], following the Kubernetes track, the documentation does not mention creating the cluster with the sql scope.



On Thursday, August 23, 2018 at 11:24:56 AM UTC-5, Sam (Google Cloud Support) wrote:

Have you tried refreshing the access token? Some OAuth 2.0 flows require using refresh tokens to acquire new access tokens, as access tokens have limited lifetimes to enhance security [1]. A refresh token will allow your application to access Cloud SQL beyond the access token's lifetime [2].


Based on the error message, you can have a look at this documentation about troubleshooting Cloud SQL connection issues [3][4]. To ensure proper configuration, I would then follow the guides in the fifth and sixth links [5][6]. The last link is an answer on Stack Overflow that I found [7]. Hope this helps.


[1] https://cloud.google.com/storage/docs/json_api/v1/how-tos/authorizing#OAuth2Authorizing

[2] https://cloud.google.com/storage/docs/json_api/v1/how-tos/authorizing#OAuth2Authorizing

[3] https://cloud.google.com/sql/docs/mysql/diagnose-issues

[4] https://cloud.google.com/sql/faq#connections

[5] https://cloud.google.com/sql/docs/mysql/connect-admin-proxy

[6] https://cloud.google.com/sql/docs/mysql/sql-proxy

[7] https://stackoverflow.com/questions/5755819/lost-connection-to-mysql-server-at-reading-initial-communication-packet-syste

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/79a14219-262a-4940-a987-dec31c7eb233%40googlegroups.com.

[google-cloud-sql-discuss] Re: CloudSQL Proxy error: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: dial tcp: i/o timeout

Have you tried refreshing the access token? Some OAuth 2.0 flows require using refresh tokens to acquire new access tokens, as access tokens have limited lifetimes to enhance security [1]. A refresh token will allow your application to access Cloud SQL beyond the access token's lifetime [2].


Based on the error message, you can have a look at this documentation about troubleshooting Cloud SQL connection issues [3][4]. To ensure proper configuration, I would then follow the guides in the fifth and sixth links [5][6]. The last link is an answer on Stack Overflow that I found [7]. Hope this helps.


[1] https://cloud.google.com/storage/docs/json_api/v1/how-tos/authorizing#OAuth2Authorizing

[2] https://cloud.google.com/storage/docs/json_api/v1/how-tos/authorizing#OAuth2Authorizing

[3] https://cloud.google.com/sql/docs/mysql/diagnose-issues

[4] https://cloud.google.com/sql/faq#connections

[5] https://cloud.google.com/sql/docs/mysql/connect-admin-proxy

[6] https://cloud.google.com/sql/docs/mysql/sql-proxy

[7] https://stackoverflow.com/questions/5755819/lost-connection-to-mysql-server-at-reading-initial-communication-packet-syste

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/33fe921b-f50a-43e2-a671-5f046a2057c3%40googlegroups.com.

Wednesday, August 22, 2018

[google-cloud-sql-discuss] Re: Are there alternatives for CloudSQL PostgreSQL logical replication for now?

I couldn't find any hook or setup for CDC from Cloud SQL to Cloud Dataflow; however, you may create a feature request so that the Cloud SQL product team can evaluate it.


--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/2bb89692-0e35-4a6e-ab8f-e87c7d6e63b4%40googlegroups.com.

[google-cloud-sql-discuss] Re: Output DataPrep job to PostGres

You can find examples on Stack Exchange of writing to a Postgres database via the Java JdbcIO transform. If you are not using Java, then it is recommended to post an official feature request with the Apache Beam team in their Issue Tracker. 

As a workaround, you can always use the TextIO transform to write the data to Google Cloud Storage in something like a .csv file. Then set up a trigger that runs a simple function in Google Cloud Functions to read the file and write it to your Postgres DB. 
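The final load step of that workaround might be sketched as follows, assuming the exported CSV is copied down from the bucket first (bucket, table, and connection details below are placeholders):

```shell
# Fetch the exported CSV from the bucket, then bulk-load it into
# Postgres with \copy, which streams the file from the client side
gsutil cp gs://my-bucket/output/results.csv /tmp/results.csv
psql "host=127.0.0.1 dbname=mydb user=myuser" \
    -c "\copy my_table FROM '/tmp/results.csv' WITH (FORMAT csv, HEADER)"
```

Using \copy rather than server-side COPY avoids needing the file on the database host, which matters for a managed instance like Cloud SQL.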

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/e4d929f5-9511-4e1d-8912-62db4274708e%40googlegroups.com.

[google-cloud-sql-discuss] CloudSQL Proxy error: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: dial tcp: i/o timeout

We have just migrated from a trial GCP account to a "real" one and are now unable to connect to our pre-existing MySQL Cloud SQL instance from GKE.

We have a pod (in a deployment) that has a Cloud SQL Proxy container and a WordPress one (which I've replaced with a simple mysql container running a while-true-sleep loop, so we can exec in and test a command-line mysql connection).  The client errors out with:

ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 107

And the Cloud SQL Proxy log shows this:

2018/08/22 15:41:57 New connection for "PROJECT:us-central1:MYSQL_INSTANCE_ID"
2018/08/22 15:42:27 couldn't connect to "PROJECT:us-central1:MYSQL_INSTANCE_ID": Post https://www.googleapis.com/sql/v1beta4/projects/PROJECT/instances/MYSQL_INSTANCE_ID/createEphemeral?alt=json: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: dial tcp: i/o timeout

Note the 30 seconds elapsed time.

The cloudsql-proxy is invoked with:

        command: ["/cloud_sql_proxy"]
        args: ["-instances=$(MYSQL_INSTANCE)",
               "-credential_file=/secrets/cloudsql/XXX-mysql-proxy-access.json",
               "-verbose=true"]

Where XXX-mysql-proxy-access.json contains the JSON credentials of a service account assigned the Cloud SQL Client role, and $MYSQL_INSTANCE is PROJECT:us-central1:MYSQL_INSTANCE_ID=tcp:3306.

I've checked to ensure that the contents of XXX-mysql-proxy-access.json match the key on the service account.

The cluster nodes are on v1.10.6-gke.1 and we are using the latest Cloud SQL Proxy image (gcr.io/cloudsql-docker/gce-proxy:1.11).  We tried making the MySQL instance reside in the same zone as the nodes (us-central1-b), but nothing changed.


--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/8a4eafb6-71ff-4abb-b2e8-18053fe5bba8%40googlegroups.com.

[google-cloud-sql-discuss] How do i know why I am using so much Storage ?


Hi

I have a set of databases that are about 64 GB in size, but my Cloud SQL storage usage shows around 768 GB and growing.

I also have a failover database, and it only shows 84 GB.

My question is: how do I know why I am using so much storage?  I know it can be backups or binlogs, but there is no place to check that out.

Could you help me ?

Cheers


--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/b4c13afe-a8cf-4b79-b836-439449338731%40googlegroups.com.

[google-cloud-sql-discuss] Output DataPrep job to PostGres

I'm running jobs using DataPrep and DataFlow and writing the output to BigQuery at the moment; my team's plan is to move from our Postgres DB to BigQuery once we outgrow the former.

However, DataPrep and DataFlow are far better at producing our desired output. We are currently writing the output to CSV in a storage bucket rather than running code from our local machines that chunks the rows into Postgres.
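The chunked-load step described above can be sketched as follows. This is a minimal illustration using Python's standard-library sqlite3 module as a stand-in for Postgres (with a Postgres driver such as psycopg2, only the connection line and the parameter placeholders would change); the table and column names are made up for the example.

```python
import csv
import io
import sqlite3
from itertools import islice

def load_csv_in_chunks(conn, csv_file, table, columns, chunk_size=1000):
    """Insert CSV rows into `table` in batches of `chunk_size` rows."""
    reader = csv.reader(csv_file)
    placeholders = ", ".join("?" for _ in columns)
    sql = f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({placeholders})"
    total = 0
    while True:
        chunk = list(islice(reader, chunk_size))
        if not chunk:
            break
        conn.executemany(sql, chunk)
        conn.commit()  # one commit per chunk keeps each transaction small
        total += len(chunk)
    return total

# Demo with an in-memory database and two rows of made-up data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (id TEXT, score TEXT)")
data = io.StringIO("a,1\nb,2\n")
n = load_csv_in_chunks(conn, data, "results", ["id", "score"], chunk_size=1)
```

Keeping the batch size bounded avoids holding the whole CSV in memory and keeps each transaction short, which matters when loading large exports over a network link.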

My question (and feature request) is: when will I be able to write DataPrep and DataFlow output directly into my Postgres DB? That would be ideal. Even when we move to BigQuery, our Postgres implementation still satisfies business cases on the cheap, so we won't move off it completely.

Thanks for reading


--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/41b16a04-3a9f-4975-aff1-b04270eee114%40googlegroups.com.

Tuesday, August 21, 2018

[google-cloud-sql-discuss] Re: Lots of "An unknown error occurred" on automatic backups in operations logs

Hi Hussain,


I recommend following up on the private report you have already created in the Issue Tracker.


Note that Google Groups are reserved for general Google Cloud Platform product discussions, not for reporting issues.


--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/527e9bb6-717e-44bc-b505-7a02e3dfa4e9%40googlegroups.com.

[google-cloud-sql-discuss] Re: Are there alternatives for CloudSQL PostgreSQL logical replication for now?

@Katayoon - regarding Dataflow: is it possible to connect Dataflow to read change data capture (CDC) events from Cloud SQL? Is there any other technology or setup that can publish changes from Cloud SQL to subscribers? I've been looking for cookbook recipes or any mention of this, but I haven't found anything.

Thanks.

On Thursday, 21 June 2018 15:30:04 UTC+1, Katayoon (Cloud Platform Support) wrote:

Correct: Cloud SQL for PostgreSQL does not currently support replication from an external master, or external replicas for Cloud SQL instances. I should note that the Cloud SQL product team is working on this feature request and it is planned to be available soon; however, we cannot provide any ETA for the implementation.


Another storage option is Cloud Spanner, which is ideal for relational, structured data that requires transactional reads and writes.


Whether the Dataflow option applies depends on the amount and frequency of your transactions, so I recommend that you contact the Google Cloud Platform sales team and discuss your project with them. If you have a Premium support package, the architecture advisory service is available to you as well.


I should note that Google Groups are reserved for general product discussions; we cannot provide advice on your system's architecture.


--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/2cc88a8d-f9c1-4b4f-b2c9-8ceed54c230d%40googlegroups.com.

Monday, August 20, 2018

Re: [google-cloud-sql-discuss] Re: How do I install MS SQL Server 2012 in my existing Windows 2016 VM instance

Hello Juan,

Please be advised that this forum is meant for general questions about the platform. For further technical help with your issue, I would advise you to open a Public Issue Tracker report so that we can look into it and help you recover your machine.

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/945637a9-8c71-477a-a0d4-17370b3698d2%40googlegroups.com.

Re: [google-cloud-sql-discuss] Re: Unable to connect cloud SQL postgres

Hello Binaya, 

The error you refer to is specific to SQLAlchemy, so you'd be better served contacting SQLAlchemy support. You may also have a look at the "psycopg2 throws an exception in sqlalchemy when updating to 2.4.0" issue on GitHub.

A possible solution is provided in a reply to the "Can't connect the postgreSQL with psycopg2" question on Stack Overflow.

This discussion group is oriented more toward general opinions, trends, and issues of a general nature touching App Engine. For coding and programming-architecture questions, you may be better served in a forum such as Stack Overflow, where experienced programmers are within reach and ready to help.

--
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/d6414927-af99-49c4-997c-3a374c37941e%40googlegroups.com.