Thursday, August 29, 2019

[google-cloud-sql-discuss] Re: best way to Export Cloud SQL database

Note that the flag "--skip-lock-tables" is not currently supported by Cloud SQL. A feature request was filed and you can follow up on the progress of this request here. The supported flags are listed here.

On Thursday, August 29, 2019 at 6:48:47 PM UTC-4, chen levi wrote:
Hi George,

thanks for your response.
I did read the mentioned documentation pretty carefully, but nothing is mentioned there about the actual locking behavior.

As mentioned, the final command I used was mysqldump db table --quick --skip-lock-tables --single-transaction | gzip > /tmp/mytable.gz
As you can see, we do specify the DB and the specific table we want to export.
I also used the other flags to make the export faster, but it wasn't clear that the table would be locked after all (I checked it with SHOW OPEN TABLES).

Anyway, now that I've switched to gcloud sql export, the export operation took a few minutes and completed successfully for the entire DB, so it seems like this approach is better for us.
(Probably because the export runs within Google Cloud, while with the mysqldump approach it goes to a remote server.)
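For anyone following along, the whole sync can also be scripted with gcloud; a minimal sketch, where the instance, bucket, and database names are placeholders:

    # export the prod database to a GCS bucket (Cloud SQL writes the file itself)
    gcloud sql export sql prod-instance gs://my-sync-bucket/mydb-export.sql.gz --database=mydb

    # import the same file into the staging instance in the other project
    gcloud sql import sql staging-instance gs://my-sync-bucket/mydb-export.sql.gz --database=mydb

The Cloud SQL instances' service accounts need write (for export) and read (for import) access to the bucket for these operations to succeed.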


10x
On Thursday, August 29, 2019 at 12:22:58 AM UTC+3, George (Cloud Platform Support) wrote:
Hello Chen, 

You should refer to the "Best practices for importing and exporting data" documentation page, of value in itself, but also for the link to "Exporting data for import into Cloud SQL", which has a highly significant sub-chapter, "Exporting without views using mysqldump". You must use the --databases option to specify an explicit list of databases to export, and this list must not contain the mysql system database.
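As a rough illustration of that advice (host and database names below are placeholders, not taken from this thread), a dump that lists databases explicitly and leaves out the mysql system database might look like:

    mysqldump --host=<cloud-sql-ip> --user=root --password \
        --databases mydb1 mydb2 \
        --single-transaction --quick --hex-blob \
        --set-gtid-purged=OFF \
        | gzip > export.sql.gz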


[google-cloud-sql-discuss] Re: best way to Export Cloud SQL database

Hi George,

thanks for your response.
I did read the mentioned documentation pretty carefully, but nothing is mentioned there about the actual locking behavior.

As mentioned, the final command I used was mysqldump db table --quick --skip-lock-tables --single-transaction | gzip > /tmp/mytable.gz
As you can see, we do specify the DB and the specific table we want to export.
I also used the other flags to make the export faster, but it wasn't clear that the table would be locked after all (I checked it with SHOW OPEN TABLES).

Anyway, now that I've switched to gcloud sql export, the export operation took a few minutes and completed successfully for the entire DB, so it seems like this approach is better for us.
(Probably because the export runs within Google Cloud, while with the mysqldump approach it goes to a remote server.)


10x
On Thursday, August 29, 2019 at 12:22:58 AM UTC+3, George (Cloud Platform Support) wrote:
Hello Chen, 

You should refer to the "Best practices for importing and exporting data" documentation page, of value in itself, but also for the link to "Exporting data for import into Cloud SQL", which has a highly significant sub-chapter, "Exporting without views using mysqldump". You must use the --databases option to specify an explicit list of databases to export, and this list must not contain the mysql system database.


Wednesday, August 28, 2019

[google-cloud-sql-discuss] Re: best way to Export Cloud SQL database

Hello Chen, 

You should refer to the "Best practices for importing and exporting data" documentation page, of value in itself, but also for the link to "Exporting data for import into Cloud SQL", which has a highly significant sub-chapter, "Exporting without views using mysqldump". You must use the --databases option to specify an explicit list of databases to export, and this list must not contain the mysql system database.


Tuesday, August 27, 2019

[google-cloud-sql-discuss] best way to Export Cloud SQL database

Hi

I'm trying to find the best way to export a Cloud SQL database from one project and import it into another.
I have 2 Cloud SQL instances in 2 different projects (prod/staging).

In order to sync the prod database to staging, I'm using the mysqldump command to export each needed table from the prod DB and then import the output into the staging DB.
I'm doing so using the following command:
mysqldump db table --skip-lock-tables --single-transaction | gzip > /tmp/mytable.gz

Once I've finished exporting all the tables, I use mysqldump to import the gz files into the staging DB.

The problem is that sometimes (too often) I'm getting one of two issues:

- I'm getting the error "mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table mytable at row: xxxxxx"
- The export just hangs and takes way too much time.

I see it happens on big tables, so at first I thought it was probably related to net_read_timeout and net_write_timeout, which had default values (30/60), so I tried increasing them to 900, and even 3600.
The second issue still happens.
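(For what it's worth, on a Cloud SQL instance those timeouts would be raised through database flags rather than my.cnf; a sketch, assuming net_read_timeout and net_write_timeout are on Cloud SQL's supported-flags list, which is worth verifying in the documentation first:)

    gcloud sql instances patch prod-instance \
        --database-flags=net_read_timeout=3600,net_write_timeout=3600

Note that patching replaces the full set of database flags (so include any flags already set) and may restart the instance.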

I also tried adding the --quick option, which is supposed to be better for large tables as it doesn't load the entire content into memory but goes row by row. It still hangs.

After investigating this issue on the net, I've started to think that I'm not using mysqldump the right way.
- It seems like --skip-lock-tables is useless here (InnoDB), as --single-transaction will make sure the dump happens in one transaction. Right?
- Even with --single-transaction, does it mean that during the dump, which as I saw sometimes takes a lot of time, I'm blocking my production database from operations such as CREATE/ALTER TABLE, etc.?
- What is the difference between using mysqldump and gcloud sql export sql? Is the latter better suited for this kind of operation?

thanks a lot
Chen


[google-cloud-sql-discuss] Corrupted GCE Disk

One of our GCE instances went offline today, and we failed to reboot it.
From the serial port console, we realized the disk seems to be corrupted (refer to the attachment).

How can that happen in a cloud environment? Is there any way we can fix this?


Sunday, August 25, 2019

[google-cloud-sql-discuss] Re: Enable logical replication on Postgres instance

Logical replication is not supported for Cloud SQL Postgres. The Cloud SQL product team is aware of this feature request, and I recommend you track their progress on the issue tracker.

On Sunday, August 25, 2019 at 4:51:25 PM UTC-4, Tu Nguyen Huu wrote:
I want to enable logical replication on a Postgres instance to stream data changes to Apache Kafka. Is there any way to do this?


Saturday, August 24, 2019

[google-cloud-sql-discuss] Enable logical replication on Postgres instance

I want to enable logical replication on a Postgres instance to stream data changes to Apache Kafka. Is there any way to do this?


Friday, August 23, 2019

Re: [google-cloud-sql-discuss] Re: Cloud MySQL Free Memory

MySQL will use as much memory for caching as it can. If MySQL needs memory, cached objects will be evicted.  Trying to manage MySQL memory is not a good idea. Pick an instance size to meet your performance requirements.

Again, I don't understand what problem you are trying to solve.

Where did you come to believe that MySQL queries will fail if the cache is full? Why do you think forcing a cache flush will improve anything? MySQL will (usually) be happy to use the disk albeit at lower performance than memory. If a query requires more physical memory than the instance has, then your only real option is to scale up. Flushing the cache will not achieve this.
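If scaling up is the route you take, it's a one-line change; the instance and tier names below are only examples, not a sizing recommendation:

    gcloud sql instances patch my-instance --tier=db-n1-standard-4

Changing the tier restarts the instance, so schedule it for a quiet period.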

On 8/23/2019 8:10:29 PM, Devin Homan <devin@flyntlok.com> wrote:

John,

If a SQL operation requires more free memory than is available, the operation will fail but the memory will be freed. The memory usage does ideally sit at around 80%, but I've had it go above 90% and then drop back down to 20%. The number of connections isn't an issue and the connections are pooled; many of the connections being closed is just what I've seen happen when the memory goes back down. I'm guessing this is behavior that Google has implemented behind the scenes, as I cannot find documentation of it, and that the system does this because there is no swap space/virtual memory and running out of memory can cause operating system problems.

It would be nice if I could have a nightly process that triggered this behavior, so that tables that may not be needed the next day aren't left cached in memory. Restarting the database has the same effect, plus more, with more disruption, obviously. I would think that

FLUSH TABLES

would have this effect, but it does not. There may be more standard ways of getting MySQL to give up the memory, but it seems there is already a behind-the-scenes mechanism in Cloud MySQL that does this.

On Friday, August 23, 2019 at 6:13:48 PM UTC-8, Devin Homan wrote:
Is it possible to tell MySQL to free up memory? When the memory usage gets to a certain point, say 95% used, Google Cloud MySQL will automatically free the memory and close most connections. Is there any way to manually trigger this behavior?



[google-cloud-sql-discuss] Re: Cloud MySQL Free Memory

John,

If a SQL operation requires more free memory than is available, the operation will fail but the memory will be freed. The memory usage does ideally sit at around 80%, but I've had it go above 90% and then drop back down to 20%. The number of connections isn't an issue and the connections are pooled; many of the connections being closed is just what I've seen happen when the memory goes back down. I'm guessing this is behavior that Google has implemented behind the scenes, as I cannot find documentation of it, and that the system does this because there is no swap space/virtual memory and running out of memory can cause operating system problems.

It would be nice if I could have a nightly process that triggered this behavior, so that tables that may not be needed the next day aren't left cached in memory. Restarting the database has the same effect, plus more, with more disruption, obviously. I would think that

FLUSH TABLES

would have this effect, but it does not. There may be more standard ways of getting MySQL to give up the memory, but it seems there is already a behind-the-scenes mechanism in Cloud MySQL that does this.

On Friday, August 23, 2019 at 6:13:48 PM UTC-8, Devin Homan wrote:
Is it possible to tell MySQL to free up memory? When the memory usage gets to a certain point, say 95% used, Google Cloud MySQL will automatically free the memory and close most connections. Is there any way to manually trigger this behavior?


Re: [google-cloud-sql-discuss] Cloud MySQL Free Memory

Devin,

Why is memory usage a concern on Cloud SQL? The design of MySQL is to use as much memory as possible for caching (around 80%).

If you have a connection count issue, the correct approach is to 1) fix connection leaks in your code, and 2) use connection pooling.

On 8/23/2019 7:13:50 PM, Devin Homan <devin@flyntlok.com> wrote:

Is it possible to tell MySQL to free up memory? When the memory usage gets to a certain point, say 95% used, Google Cloud MySQL will automatically free the memory and close most connections. Is there any way to manually trigger this behavior?



[google-cloud-sql-discuss] Cloud MySQL Free Memory

Is it possible to tell MySQL to free up memory? When the memory usage gets to a certain point, say 95% used, Google Cloud MySQL will automatically free the memory and close most connections. Is there any way to manually trigger this behavior?


Thursday, August 22, 2019

[google-cloud-sql-discuss] Re: Trying to understand the I/O performance of our current Oracle DB in performance lab and compare it with our GCP Oracle’s I/O

To my understanding, you have an Oracle database set up on a VM within Google Cloud Platform. I would recommend you consider installing the Stackdriver Monitoring agent, which gathers metrics from VM instances and third-party applications and sends that information to Monitoring. Read more here.

It seems that the metric you are looking for is `io_time`. Default Stackdriver monitoring covers disk I/O (amongst other metrics) but not disk usage, so you may consider the `compute` metrics for disk usage.

You could then use the Google API Explorer to generate an API call that retrieves the information you require.

Once you have this information, there is some useful material here and here which may be helpful in constructing the API call in the API Explorer. Note that Stackdriver Monitoring only records metrics at intervals, so it may not be suitable if you want real-time data.
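To make that concrete, a Monitoring API request for the agent's disk io_time metric could look roughly like the sketch below; the project ID and time window are placeholders, and the exact metric type should be confirmed against the metrics list.

    curl -G \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        "https://monitoring.googleapis.com/v3/projects/my-project/timeSeries" \
        --data-urlencode 'filter=metric.type="agent.googleapis.com/disk/io_time"' \
        --data-urlencode 'interval.startTime=2019-08-22T00:00:00Z' \
        --data-urlencode 'interval.endTime=2019-08-22T01:00:00Z'

The same query can be entered in the API Explorer's timeSeries.list form instead of curl if that is easier.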

On Thursday, August 22, 2019 at 12:14:28 PM UTC-4, kenny wrote:
We're trying to make sure GCP's I/O performance is at least as good as our lab's, so that when we find issues they won't be caused by GCP's I/O. 

Any help/pointers would be greatly appreciated.

 


[google-cloud-sql-discuss] Trying to understand the I/O performance of our current Oracle DB in performance lab and compare it with our GCP Oracle’s I/O

We're trying to make sure GCP's I/O performance is at least as good as our lab's, so that when we find issues they won't be caused by GCP's I/O. 

Any help/pointers would be greatly appreciated.

 


[google-cloud-sql-discuss] Trying to understand the I/O performance of our current Oracle DB in performance lab and compare it with our GCP Oracle’s I/O

We're trying to make sure GCP's I/O performance is at least as good as our lab's, so that when we find issues they won't be caused by GCP's I/O. 

I wanted to know how we can achieve this; can someone kindly help me with this?


Wednesday, August 21, 2019

[google-cloud-sql-discuss] Re: Is there a fast way to export MySQL data from AWS RDS and import it into Google CloudSQL?

Hello Jacob, 

You may consider following the related Cloud SQL documentation: "Importing data into Cloud SQL". This discussion group is oriented more towards general opinions, trends, and issues of a general nature touching Cloud SQL. For coding and program architecture, as well as exporting and importing MySQL data, you may be better served in dedicated forums such as Stack Overflow, where experienced programmers are within reach and ready to help.
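For the Cloud SQL side specifically, the documented flow is to stage the dump file in Cloud Storage and import it from there; a minimal sketch with placeholder bucket and instance names:

    gsutil cp dumpfile.sql gs://my-import-bucket/
    gcloud sql import sql my-cloudsql-instance gs://my-import-bucket/dumpfile.sql --database=database

The Cloud SQL instance's service account needs read access to the bucket for the import to work.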


Re: [google-cloud-sql-discuss] Re: Major Google Cloud SQL issue - SQL Cloud unresponsive

Hi George,

Thank you for the update. There wasn't (and there still isn't) any mention of this issue on the Google Cloud dashboard. Would you mind pasting a link here that mentions this issue?

Cheers,
-Hugues

On Wed, Aug 21, 2019 at 3:17 PM 'George (Cloud Platform Support)' via Google Cloud SQL discuss <google-cloud-sql-discuss@googlegroups.com> wrote:
Hello Hugues, 

You are right, something on Google's end was not going as expected, which explains your connectivity issues on 7 August. The issue has since been fixed. You may check the Cloud Status Dashboard for such occurrences at any time.



[google-cloud-sql-discuss] Re: Major Google Cloud SQL issue - SQL Cloud unresponsive

Hello Hugues, 

You are right, something on Google's end was not going as expected, which explains your connectivity issues on 7 August. The issue has since been fixed. You may check the Cloud Status Dashboard for such occurrences at any time.


Tuesday, August 20, 2019

[google-cloud-sql-discuss] Major Google Cloud SQL issue - SQL Cloud unresponsive

Hi all,

Here at Betabrand, we have been facing a major issue that started around Aug. 7th, is still occurring, and is taking our application down.

Our Google Cloud SQL instance suddenly stops responding to connection requests.

It seems to have been reported on the Google Cloud SQL GitHub account on Aug. 8th: https://github.com/GoogleCloudPlatform/cloudsql-proxy/issues/297

Something on Google's end is wrong.

We are seeing a lot of "Throttling refreshCfg" errors from the Google Cloud SQL Proxy in our logs and many connection errors in our application.

Please advise ASAP.



[google-cloud-sql-discuss] Is there a fast way to export MySQL data from AWS RDS and import it into Google CloudSQL?

I have an 885 GB MySQL 5.6 database running in Amazon RDS. I'd like to move it into Google's Cloud SQL service. To do so I'm taking the following steps:

I'm following Amazon's instructions for moving a database out of RDS (since Google seems to require GTID for replication and RDS does not support GTID for MySQL 5.6).

  1. Created an RDS read replica.
  2. Once the read replica was up to date with the master, I stopped replication, recorded the binlog location, and dumped the database to a file.
  3. Brought up an EC2 instance running Ubuntu and MySQL 5.6, and I'm importing the dump file into the EC2 database.

The problem I'm having is that the import of the dump file into the EC2 database is taking much longer than I had hoped. After about three and a half days, the EC2 instance is only about 60% done with the database load.

The mysqldump command I ran was based on Amazon's recommendation...


mysqldump -h RdsInstanceEndpoint \
    -u user \
    -p password \
    --port=3306 \
    --single-transaction \
    --routines \
    --triggers \
    --databases database database2 \
    --compress \
    --compact > dumpfile.sql.gz


I decompressed the dump file, and to import the data I am simply running...


mysql -u user -p password < dumpfile.sql


Is there anything I can do to make this process run faster? Are there any command line options I should be using that I am not?
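One common way to speed up a plain `mysql < dumpfile.sql` load is to have the server skip per-row constraint checks while the dump is replayed; a sketch, assuming you trust the dump's integrity (user and file name as in the post):

    ( echo "SET unique_checks=0; SET foreign_key_checks=0;"
      cat dumpfile.sql ) | mysql -u user -p

Raising innodb_buffer_pool_size on the EC2 box for the duration of the load usually helps as well.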


Monday, August 12, 2019

[google-cloud-sql-discuss] Re: cloud SQL MYSQL database

The Federated storage engine feature is not supported in Cloud SQL. You may find related information in the "Unsupported features" paragraph on the "Cloud SQL for MySQL features" documentation page.


Sunday, August 11, 2019

[google-cloud-sql-discuss] cloud SQL MYSQL database

How do I add the federated storage engine to a Cloud SQL database?


Thursday, August 8, 2019

[google-cloud-sql-discuss] Re: Create a cloud function to export Cloud SQL databases from another project

You can access Second Generation MySQL instances as well as PostgreSQL instances in other projects if your Cloud Function's service account (listed on the Cloud Function's General tab in the GCP Console) is added as an IAM member, with the Cloud SQL Client role, on the project that hosts the Cloud SQL instance(s). You may read related details on the "Connecting to Cloud SQL" documentation page.

Code-wise, there is no special arrangement to access buckets in other projects, as bucket names are globally unique. You still need to grant appropriate GCS permissions to your service account.  
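A sketch of the two grants described above, with placeholder project, service-account, and bucket names:

    # allow the function's service account to reach Cloud SQL in the other project
    gcloud projects add-iam-policy-binding sql-project-id \
        --member="serviceAccount:my-function-sa@functions-project-id.iam.gserviceaccount.com" \
        --role="roles/cloudsql.client"

    # allow it to write export files to the target bucket
    gsutil iam ch \
        serviceAccount:my-function-sa@functions-project-id.iam.gserviceaccount.com:objectAdmin \
        gs://my-export-bucket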


Re: [google-cloud-sql-discuss] Create a cloud function to export Cloud SQL databases from another project

Hi, you can do this, but you need to grant rights to the App Engine service account used by your Cloud Function in each of the projects with a Cloud SQL instance you want to work with. I think the role is Cloud SQL User.

On Thu, Aug 8, 2019, 12:58 PM David Oceans <david.oceans@gmail.com> wrote:
Hi!

I have a Cloud Function created in the same project where my Cloud SQL instance is, and it is working: I'm able to export data to a GCS bucket.

But I would like to have all my Cloud Functions centralized in one project.

The question is: can I have the Cloud Function in another project and be able to export a Cloud SQL instance that is in a different project?
How can I do it? I copied the code of my Cloud Function into another project and the execution seems to work, but it doesn't create the file in the GCS bucket.

Is this possible?

Thanks



[google-cloud-sql-discuss] Re: HikariCP cloudSqlInstance property not set

Hi Aniket - I've responded to your GitHub issue on the subject: https://github.com/GoogleCloudPlatform/cloud-sql-jdbc-socket-factory/issues/154

On Thursday, August 8, 2019 at 2:12:52 AM UTC-7, Aniket Bhadane wrote:
I wish to connect to Google Cloud SQL using JDBC SocketFactory with HikariCP in a Spring 4 application.

The dependencies in pom.xml are:

 <dependency>
   <groupId>org.postgresql</groupId>
   <artifactId>postgresql</artifactId>
   <version>42.1.1</version>
 </dependency>
 <dependency>
   <groupId>com.google.cloud.sql</groupId>
   <artifactId>postgres-socket-factory</artifactId>
   <version>1.0.14</version>
 </dependency>
 <dependency>
   <groupId>com.zaxxer</groupId>
   <artifactId>HikariCP</artifactId>
   <version>3.3.1</version>
 </dependency>


The applicationContext.xml contains:

 <bean id="hikariConfig" class="com.zaxxer.hikari.HikariConfig">
   <property name="poolName" value="springHikariCP" />
   <property name="connectionTestQuery" value="SELECT 1" />
   <property name="dataSourceClassName" value="org.postgresql.ds.PGSimpleDataSource" />
   <property name="maximumPoolSize" value="10" />
   <property name="idleTimeout" value="30000" />
   <property name="dataSourceProperties">
     <props>
       <prop key="url">jdbc:postgresql://google/mydb?cloudSqlInstance=projectId:region:instance&amp;socketFactory=com.google.cloud.sql.postgres.SocketFactory</prop>
       <prop key="user">postgres</prop>
       <prop key="password">password</prop>
     </props>
   </property>
 </bean>


But when I run the application, I get the following exception:

Caused by: java.lang.IllegalArgumentException: cloudSqlInstance property not set. Please specify this property in the JDBC URL or the connection Properties with value in form "project:region:instance"
        at com.google.common.base.Preconditions.checkArgument(Preconditions.java:135)
        at com.google.cloud.sql.core.CoreSocketFactory.connect(CoreSocketFactory.java:170)
        at com.google.cloud.sql.postgres.SocketFactory.createSocket(SocketFactory.java:72)
        at org.postgresql.core.PGStream.<init>(PGStream.java:60)
        at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:144)
        at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
        at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:194)
        at org.postgresql.Driver.makeConnection(Driver.java:450)
        at org.postgresql.Driver.access$100(Driver.java:60)
        at org.postgresql.Driver$ConnectThread.run(Driver.java:360)
        ... 1 common frames omitted


What could be going wrong here?


[google-cloud-sql-discuss] Create a cloud function to export Cloud SQL databases from another project

Hi!

I have a Cloud Function created in the same project where my Cloud SQL instance is, and it is working: I'm able to export data to a GCS bucket.

But I would like to have all my Cloud Functions centralized in one project.

The question is: can I have the Cloud Function in another project and be able to export a Cloud SQL instance that is in a different project?
How can I do it? I copied the code of my Cloud Function into another project and the execution seems to work, but it doesn't create the file in the GCS bucket.

Is this possible?

Thanks


[google-cloud-sql-discuss] HikariCP cloudSqlInstance property not set

I wish to connect to Google Cloud SQL using JDBC SocketFactory with HikariCP in a Spring 4 application.

The dependencies in pom.xml are:

 <dependency>
   <groupId>org.postgresql</groupId>
   <artifactId>postgresql</artifactId>
   <version>42.1.1</version>
 </dependency>
 <dependency>
   <groupId>com.google.cloud.sql</groupId>
   <artifactId>postgres-socket-factory</artifactId>
   <version>1.0.14</version>
 </dependency>
 <dependency>
   <groupId>com.zaxxer</groupId>
   <artifactId>HikariCP</artifactId>
   <version>3.3.1</version>
 </dependency>


The applicationContext.xml contains:

 <bean id="hikariConfig" class="com.zaxxer.hikari.HikariConfig">
   <property name="poolName" value="springHikariCP" />
   <property name="connectionTestQuery" value="SELECT 1" />
   <property name="dataSourceClassName" value="org.postgresql.ds.PGSimpleDataSource" />
   <property name="maximumPoolSize" value="10" />
   <property name="idleTimeout" value="30000" />
   <property name="dataSourceProperties">
     <props>
       <prop key="url">jdbc:postgresql://google/mydb?cloudSqlInstance=projectId:region:instance&amp;socketFactory=com.google.cloud.sql.postgres.SocketFactory</prop>
       <prop key="user">postgres</prop>
       <prop key="password">password</prop>
     </props>
   </property>
 </bean>


But when I run the application, I get the following exception:

Caused by: java.lang.IllegalArgumentException: cloudSqlInstance property not set. Please specify this property in the JDBC URL or the connection Properties with value in form "project:region:instance"
        at com.google.common.base.Preconditions.checkArgument(Preconditions.java:135)
        at com.google.cloud.sql.core.CoreSocketFactory.connect(CoreSocketFactory.java:170)
        at com.google.cloud.sql.postgres.SocketFactory.createSocket(SocketFactory.java:72)
        at org.postgresql.core.PGStream.<init>(PGStream.java:60)
        at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:144)
        at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
        at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:194)
        at org.postgresql.Driver.makeConnection(Driver.java:450)
        at org.postgresql.Driver.access$100(Driver.java:60)
        at org.postgresql.Driver$ConnectThread.run(Driver.java:360)
        ... 1 common frames omitted


What could be going wrong here?


Monday, August 5, 2019

Re: [google-cloud-sql-discuss] Re: WordPress changes to support Google Cloud SQL SSL

Hi Elliott,

They already know. If you want contact me directly.

John

On 8/5/2019 2:32:49 PM, 'Nicolas (Google Cloud Platform Support)' via Google Cloud SQL discuss <google-cloud-sql-discuss@googlegroups.com> wrote:

Hi John,

Can I ask in which communication you were asked to open this thread? I'm asking so we can bring it to the right team effectively.

Thanks in advance!



On Monday, August 5, 2019 at 4:00:38 PM UTC-4, John Hanley wrote:
Hi Elliott,

Google (@GCPCloud) asked me to post here so that Google could follow up internally.

On Friday, August 2, 2019 at 1:34:35 PM UTC-7, Elliott (Google Cloud Platform Support) wrote:

Hello,


Please note that Google Groups are reserved for general Google Cloud Platform product discussions and not for technical issues, which is why I suggest moving the troubleshooting to Stack Overflow to obtain assistance from our programming community for your question.




[google-cloud-sql-discuss] Re: WordPress changes to support Google Cloud SQL SSL

Hi John,

Can I ask in which communication you were asked to open this thread? I'm asking so we can bring it to the right team effectively.

Thanks in advance!



On Monday, August 5, 2019 at 4:00:38 PM UTC-4, John Hanley wrote:
Hi Elliott,

Google (@GCPCloud) asked me to post here so that Google could follow up internally.

On Friday, August 2, 2019 at 1:34:35 PM UTC-7, Elliott (Google Cloud Platform Support) wrote:

Hello,


Please note that Google Groups are reserved for general Google Cloud Platform product discussions and not for technical issues, which is why I suggest moving the troubleshooting to Stack Overflow to obtain assistance from our programming community for your question.

