Tuesday, May 31, 2022

[google-cloud-sql-discuss] Re: Postgres storage 16.5TB with DB size under 1TB

Hi, Miroslav, 


Can you please share the output of the queries below?

1) select pg_size_pretty(sum(size)) as "Total WAL disk usage" from pg_ls_waldir();

2) select pg_size_pretty(sum(size)) as "Total WAL disk usage" from pg_ls_waldir() where name not like '%.backup';

3) select * from pg_ls_waldir() order by modification Asc;
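
If the WAL directory turns out to be what is filling the disk, it may also be worth checking the settings that commonly prevent WAL from being recycled. A rough sketch (wal_keep_size exists on PostgreSQL 13+; older versions use wal_keep_segments instead):

show wal_keep_size;
show archive_mode;
select slot_name, active, restart_lsn from pg_replication_slots;  -- inactive slots pin WAL indefinitely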



On Monday, May 30, 2022 at 11:39:19 AM UTC-5 miro...@carted.com wrote:
Hi,

Disk usage on our Postgres instance shows we are using 16.5 TB, but running the various table-size queries (https://wiki.postgresql.org/wiki/Disk_Usage) does not show anything close to that amount. What could be taking up the space?

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/6d4d7266-f2eb-4f27-9600-fb8b462d718cn%40googlegroups.com.

[google-cloud-sql-discuss] Re: wpdatatable and Google Cloud SQL connection

Hi Meryl,

Can you provide us with more information on how you are connecting (an example or code snippet might help)?

Are you using public IP? Are you using the Cloud SQL proxy? Are you using SSL/TLS certificates?

Does your new host have an IPv4 address, or is it possibly coming from an IPv6 address? You should be able to verify the host's public IP with `curl icanhazip.com`.
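
If the new host's address turns out to be missing from the authorized networks, it can be added with something along these lines (just a sketch; INSTANCE_NAME and the address are placeholders, and note that the flag replaces the whole list, so include every network you still need):

gcloud sql instances patch INSTANCE_NAME --authorized-networks=203.0.113.10/32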

On Tuesday, May 31, 2022 at 12:14:47 PM UTC-6 me...@theteam.net.au wrote:
Hi, 

I have been using Google Cloud SQL to connect to our wpDataTables. Over the weekend, our website had to change its hosting provider, and now the SQL connection is being refused, even though we have whitelisted the new IP address and also have 0.0.0.0/0 in the whitelist.

The issue that appears is basically this: 
wpDataTables could not connect to mysql server. mysql said: There was a problem with your SQL connection - Connection refused

I have entered the correct credentials, etc. Not sure what I'm doing wrong. 

Can anybody help?
Thanks

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/3cae3b91-d32e-49a7-9342-1472579e95efn%40googlegroups.com.

Re: [google-cloud-sql-discuss] Postgres storage 16.5TB with DB size under 1TB

Do you have point in time recovery on? That will do it...
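
You can check (and, if you don't need it, turn off) point-in-time recovery with something like this (the instance name is a placeholder):

gcloud sql instances describe INSTANCE_NAME --format="value(settings.backupConfiguration.pointInTimeRecoveryEnabled)"
gcloud sql instances patch INSTANCE_NAME --no-enable-point-in-time-recovery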

Peter 

On Mon, May 30, 2022, 10:39 AM Miroslav Kosteckij <miroslav@carted.com> wrote:
Hi,

Disk usage on our Postgres instance shows we are using 16.5 TB, but running the various table-size queries (https://wiki.postgresql.org/wiki/Disk_Usage) does not show anything close to that amount. What could be taking up the space?

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/3a241566-002d-4352-9ff7-84e0cd73e28cn%40googlegroups.com.

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/CAC%2B%3DFfb3vWRTLw3wPUvm3Vxck%2BvW%3DvDXO_Q5U63AtareY3D7Qw%40mail.gmail.com.

[google-cloud-sql-discuss] Re: Share Cloud SQL DB between 2 projects with app engine

OK, solution found.

Just add the service account of Project B to the IAM policy of Project A.
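
For anyone finding this later, the grant looks roughly like this (a sketch; PROJECT_A and PROJECT_B are placeholders, the member shown is Project B's App Engine default service account, and roles/cloudsql.client is the role typically needed for connecting):

gcloud projects add-iam-policy-binding PROJECT_A \
  --member="serviceAccount:PROJECT_B@appspot.gserviceaccount.com" \
  --role="roles/cloudsql.client"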

On Monday, May 30, 2022 at 6:39:19 PM UTC+2, - - wrote:
Hi, 
I have a Postgres Cloud SQL DB instance in project A, and it works perfectly with my Django project on App Engine in project A.

Now I have developed project B and would like to share the DB instance with it.
I put the same host as project A in the settings file of project B, like:

/cloudsql/project-285409:europe-west4:project-A-DB

but when I deploy project B on App Engine, Django keeps getting a connection error:

could not connect to server: Connection refused
Is the server running locally and accepting connections on Unix domain socket "/cloudsql/project-285409:europe-west4:project-A-DB.s.PGSQL.5432"?

I have enabled the Cloud SQL API.

Did I miss some settings, or is it not possible to share a DB between two projects?

Thank you

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/1cdf81e1-6635-4378-a342-95bb8b4f27d0n%40googlegroups.com.

Monday, May 30, 2022

[google-cloud-sql-discuss] wpdatatable and Google Cloud SQL connection

Hi, 

I have been using Google Cloud SQL to connect to our wpDataTables. Over the weekend, our website had to change its hosting provider, and now the SQL connection is being refused, even though we have whitelisted the new IP address and also have 0.0.0.0/0 in the whitelist.

The issue that appears is basically this: 
wpDataTables could not connect to mysql server. mysql said: There was a problem with your SQL connection - Connection refused

I have entered the correct credentials, etc. Not sure what I'm doing wrong. 

Can anybody help?
Thanks

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/4481d413-a2a7-4c7b-8e06-67ad70d94cban%40googlegroups.com.

[google-cloud-sql-discuss] Import of data to GCP Database is failing with error

Getting this error when trying to import data into a GCP database:

Multiple databases detected in BAK file. Only importing a single database is supported

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/fddf1f07-bb23-45d3-a9e7-717a4a320217n%40googlegroups.com.

[google-cloud-sql-discuss] Postgres storage 16.5TB with DB size under 1TB

Hi,

Disk usage on our Postgres instance shows we are using 16.5 TB, but running the various table-size queries (https://wiki.postgresql.org/wiki/Disk_Usage) does not show anything close to that amount. What could be taking up the space?

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/3a241566-002d-4352-9ff7-84e0cd73e28cn%40googlegroups.com.

Sunday, May 29, 2022

[google-cloud-sql-discuss] Share Cloud SQL DB between 2 projects with app engine

Hi, 
I have a Postgres Cloud SQL DB instance in project A, and it works perfectly with my Django project on App Engine in project A.

Now I have developed project B and would like to share the DB instance with it.
I put the same host as project A in the settings file of project B, like:

/cloudsql/project-285409:europe-west4:project-A-DB

but when I deploy project B on App Engine, Django keeps getting a connection error:

could not connect to server: Connection refused
Is the server running locally and accepting connections on Unix domain socket "/cloudsql/project-285409:europe-west4:project-A-DB.s.PGSQL.5432"?

I have enabled the Cloud SQL API.

Did I miss some settings, or is it not possible to share a DB between two projects?

Thank you

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/676ac7d5-6004-4f67-96bf-3cff8a419fc8n%40googlegroups.com.

Saturday, May 28, 2022

[google-cloud-sql-discuss] Re: CloudIAP with CloudSQL/Redis and Private Connect

Is there any update from the IAP team? Can we configure IAP with Cloud SQL using a private IP?

On Thursday, April 30, 2020 at 7:23:32 PM UTC+5:30 Olu wrote:
As indicated in the IAP overview documentation[1], IAP may be used with applications running on the App Engine standard environment, the App Engine flexible environment, Compute Engine, and GKE. Cloud IAP cannot be configured with Cloud SQL or Redis at the moment. A feature request was submitted to the IAP team for this, but there is no ETA for such an implementation at this time.

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/d7fb4986-dd87-43f2-8548-61451b890b4dn%40googlegroups.com.

Thursday, May 26, 2022

[google-cloud-sql-discuss] Re: No fuss strategy to migrate a CloudSQL PostgreSQL database to another GCP account, using replication and zero loss and downtime

Hi, alexhguerra,

Users sometimes want to migrate a (normal) relational database with "zero" downtime. While downtime can be reduced, migration cannot be done without any impact on applications (that is, with truly zero downtime), because replication introduces replication lag.

The instant the decision is made to "migrate" all applications from one replica to another, applications (and therefore customers) have to wait (that is, downtime) at least as long as the "replication lag" before using the new database. In practice, the downtime is a few orders of magnitude higher (minutes to hours) because:

* Database queries can take multiple seconds to complete and in flight queries must be completed or aborted at the time of migration.

* The database has to be "warmed up" if it has substantial buffer memory - common in large databases.

* If database shards have duplicate tables, some writes may need to be paused while the shards are being migrated.

* Applications must be stopped at source and restarted in GCP and connection to the GCP database instance must be established.

* Network routes to the applications must be rerouted. Based on how DNS entries are set up, this can take some time.


All of these can be reduced with some planning and "cost" (some operations not permitted for some time before/after migration).

More about: https://cloud.google.com/architecture/database-migration-concepts-principles-part-1?hl=en
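
For reference, the replication lag mentioned above can be estimated on a PostgreSQL replica with a query along these lines (a sketch; it is only meaningful while the replica is in recovery):

select now() - pg_last_xact_replay_timestamp() as replication_lag;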



On Wednesday, May 25, 2022 at 11:02:07 AM UTC-5 alexh...@gmail.com wrote:
Hello

We need to migrate a CloudSQL PostgreSQL 12 database from one GCP account to another, and currently there's no automatic / 'cloudy' solution offered by Google to perform this.
It's a critical online database that can't stop, for a multitude of reasons.

I'm looking to implement the following process:

0 - Set up the primary as a replica source and increase the WAL file retention time.
1 - Take a PITR backup of the primary.
2 - Restore the PITR backup as a new database server.
3 - Make the necessary changes on the primary and standby to activate streaming replication.
4 - Use the standby as a read replica for a short period of time to test that the application is all right.
5 - Switch off the primary and activate the app on the standby, newly promoted to primary.

May I ask if someone has a step-by-step procedure as a reference for this?
Also, if choosing logical replication, is it possible to keep PITR in the same way?

Thanks
Alexandre

In short: a Cloud SQL PostgreSQL database replicated to another GCP account, initially through a PITR restore, then kept in sync with streaming replication, with the switchover/takeover performed manually and with zero downtime by the second one.
Once the standby is up to date with the primary (LSN, and so on), it will take over as primary (or even standalone); then we shut down the application and point it to the standby, now acting as primary.
I could see that I could use pglog.

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/36030f3e-33e8-4ea2-a0cd-14ec1bdb6accn%40googlegroups.com.

Tuesday, May 24, 2022

[google-cloud-sql-discuss] No fuss strategy to migrate a CloudSQL PostgreSQL database to another GCP account, using replication and zero loss and downtime

Hello

We need to migrate a CloudSQL PostgreSQL 12 database from one GCP account to another, and currently there's no automatic / 'cloudy' solution offered by Google to perform this.
It's a critical online database that can't stop, for a multitude of reasons.

I'm looking to implement the following process:

0 - Set up the primary as a replica source and increase the WAL file retention time.
1 - Take a PITR backup of the primary.
2 - Restore the PITR backup as a new database server.
3 - Make the necessary changes on the primary and standby to activate streaming replication.
4 - Use the standby as a read replica for a short period of time to test that the application is all right.
5 - Switch off the primary and activate the app on the standby, newly promoted to primary.

May I ask if someone has a step-by-step procedure as a reference for this?
Also, if choosing logical replication, is it possible to keep PITR in the same way?

Thanks
Alexandre

In short: a Cloud SQL PostgreSQL database replicated to another GCP account, initially through a PITR restore, then kept in sync with streaming replication, with the switchover/takeover performed manually and with zero downtime by the second one.
Once the standby is up to date with the primary (LSN, and so on), it will take over as primary (or even standalone); then we shut down the application and point it to the standby, now acting as primary.
I could see that I could use pglog.
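
For the "up to date with the primary (LSN, and so on)" check, I am thinking of comparing queries like these on each side (a sketch, assuming PostgreSQL 10 or later):

-- on the primary
select pg_current_wal_lsn();
-- on the standby
select pg_last_wal_replay_lsn();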

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/ca217fd0-0ac7-4c20-b961-e5ff67caa8e8n%40googlegroups.com.

Thursday, May 19, 2022

[google-cloud-sql-discuss] Re: Connection from Cloud Run to Cloud SQL instance timed out after 10s

The weird thing is that the error mentions MySQL (port 3307) even though we only have Postgres; we don't use MySQL at all. Also, it seems someone else experienced this recently: https://www.googlecloudcommunity.com/gc/Serverless/Cloud-Run-connection-to-Cloud-SQL-times-out-occassionally/m-p/424255/highlight/true#M355.
On Thursday, May 19, 2022 at 12:05:30 AM UTC+3 Agis Anastasopoulos wrote:
Hello.

We're deploying some Go 1.17 services to Cloud Run. They connect to our Cloud SQL (Postgres 13) instance using the instance's public IP address (i.e. using the SQL Proxy).

Recently, we've been observing frequent errors like this from our Cloud Run logs:

    Cloud SQL connection failed. Please see https://cloud.google.com/sql/docs/mysql/connect-run for additional details: connection to Cloud SQL instance at <public-ip>:3307 failed: timed out after 10s

This manifests in users receiving HTTP 500 responses to their requests.

I've gone through the database's CPU/memory/disk utilization graphs and it's totally under-utilized. There are no suspicious warning/error logs from the database either, so it all seems good there.

Our containers use the lib/pq Postgres driver, and we set statement_timeout = 1s on all of our connections. Our containers are configured so that there is at minimum one container up and running. The utilization of the containers is also extremely low when this happens.

Any insights here?

After searching, I came across https://stackoverflow.com/a/27476968/1242778, which talks about tweaking the TCP keepalive settings in the service running on Cloud Run.

Any help would be greatly appreciated,
Thanks in advance


--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/cc7a8dcd-e93d-4259-a9ca-01fe3187b334n%40googlegroups.com.

Wednesday, May 18, 2022

[google-cloud-sql-discuss] Re: Connection from Cloud Run to Cloud SQL instance timed out after 10s

I've enabled `log_connections` in our Postgres server, but I see nothing out of the ordinary. Just normal INFO-level messages about clients connecting and authenticating.

What stands out to me is that the error is about MySQL (port 3307) instead of Postgres (port 5432).
On Thursday, May 19, 2022 at 12:05:30 AM UTC+3 Agis Anastasopoulos wrote:
Hello.

We're deploying some Go 1.17 services to Cloud Run. They connect to our Cloud SQL (Postgres 13) instance using the instance's public IP address (i.e. using the SQL Proxy).

Recently, we've been observing frequent errors like this from our Cloud Run logs:

    Cloud SQL connection failed. Please see https://cloud.google.com/sql/docs/mysql/connect-run for additional details: connection to Cloud SQL instance at <public-ip>:3307 failed: timed out after 10s

This manifests in users receiving HTTP 500 responses to their requests.

I've gone through the database's CPU/memory/disk utilization graphs and it's totally under-utilized. There are no suspicious warning/error logs from the database either, so it all seems good there.

Our containers use the lib/pq Postgres driver, and we set statement_timeout = 1s on all of our connections. Our containers are configured so that there is at minimum one container up and running. The utilization of the containers is also extremely low when this happens.

Any insights here?

After searching, I came across https://stackoverflow.com/a/27476968/1242778, which talks about tweaking the TCP keepalive settings in the service running on Cloud Run.

Any help would be greatly appreciated,
Thanks in advance


--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/5860f830-8427-4ab1-93a8-13af033f43a4n%40googlegroups.com.

[google-cloud-sql-discuss] Re: Upgrade GCP hosted PostgreSQL while keeping existing user passwords

It seems that there wasn't a way to do this at the time I asked the question, but things have moved on since then.

Google Cloud SQL now provides in-place major version upgrades, which will keep the existing user passwords.

https://cloud.google.com/sql/docs/postgres/upgrade-major-db-version-inplace
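
The upgrade itself is triggered by changing the database version on the instance, roughly like this (a sketch; the instance name and target version are placeholders, and taking a backup first is strongly recommended):

gcloud sql instances patch INSTANCE_NAME --database-version=POSTGRES_13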

On Wednesday, September 1, 2021 at 8:37:09 AM UTC+1 john m wrote:
I have been tasked with upgrading our existing GCP hosted PostgreSQL databases from version 9.6 to version 13.

The upgrade instructions say that we need to create a new database and then
"Make sure the target instance has... The same user accounts, with the same PostgreSQL privileges and passwords"

We have a number of databases with dozens of users and roles; we allow users to connect directly and set their own passwords.
We would like to perform the database upgrade without changing all the passwords.

I've looked at using pg_dumpall to copy the users to a new database, but it fails as I don't have permission to read pg_authid.
It seems that we need a superuser account to be able to read pg_authid, and GCP does not permit superuser accounts.

Is there any way to upgrade the databases and keep the existing passwords?

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/d8f40293-909c-402d-a658-5e222250ab7cn%40googlegroups.com.

[google-cloud-sql-discuss] Connection from Cloud Run to Cloud SQL instance timed out after 10s

Hello.

We're deploying some Go 1.17 services to Cloud Run. They connect to our Cloud SQL (Postgres 13) instance using the instance's public IP address (i.e. using the SQL Proxy).

Recently, we've been observing frequent errors like this from our Cloud Run logs:

    Cloud SQL connection failed. Please see https://cloud.google.com/sql/docs/mysql/connect-run for additional details: connection to Cloud SQL instance at <public-ip>:3307 failed: timed out after 10s

This manifests in users receiving HTTP 500 responses to their requests.

I've gone through the database's CPU/memory/disk utilization graphs and it's totally under-utilized. There are no suspicious warning/error logs from the database either, so it all seems good there.

Our containers use the lib/pq Postgres driver, and we set statement_timeout = 1s on all of our connections. Our containers are configured so that there is at minimum one container up and running. The utilization of the containers is also extremely low when this happens.

Any insights here?

After searching, I came across https://stackoverflow.com/a/27476968/1242778, which talks about tweaking the TCP keepalive settings in the service running on Cloud Run.

Any help would be greatly appreciated,
Thanks in advance


--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/86d254c2-7d94-4a34-ad6b-3c648518cae3n%40googlegroups.com.

Monday, May 16, 2022

[google-cloud-sql-discuss] Re: Recommended cloud sql postgresql client settings

Do you see an improvement if you use the default connection limits recommended in the documentation? The documentation also describes this error being caused by connection timeouts; have you found better results by increasing the connection lifetime?
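
For reference, with database/sql those knobs usually live in code like the following (a minimal sketch, not your actual code; the DSN values are placeholders and the limits shown are the ones described in your question):

package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/jackc/pgx/v4/stdlib" // registers the "pgx" database/sql driver
)

func main() {
	// Placeholder DSN; replace with your real connection parameters.
	db, err := sql.Open("pgx", "host=HOST dbname=DBNAME user=USER password=PASSWORD sslmode=require")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	db.SetMaxOpenConns(20)                 // max connections per client
	db.SetMaxIdleConns(5)                  // max idle connections kept in the pool
	db.SetConnMaxLifetime(5 * time.Minute) // recycle connections after 5 minutes
}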

On Friday, May 13, 2022 at 6:31:53 PM UTC-5 r...@transcarent.ai wrote:
Yes.

We defined the instance creation in terraform as:

resource "google_sql_database_instance" "postgres" {
  name = "prefix-postgres"
  database_version = "POSTGRES_13"
  deletion_protection = true

  settings {
    tier = "db-custom-4-15360"

    disk_autoresize = true
    disk_size = 200
    disk_type = "PD_SSD"

    activation_policy = "ALWAYS"
    availability_type = "REGIONAL"

    backup_configuration {
      enabled = true
      start_time = "07:00"
      location = "us"
      transaction_log_retention_days = 7
      point_in_time_recovery_enabled = true
      backup_retention_settings {
        retained_backups = 365
        retention_unit = "COUNT"
      }
    }

    database_flags {
      name = "cloudsql.enable_pgaudit"
      value = "on"
    }

    database_flags {
      name = "pgaudit.log_parameter"
      value = "on"
    }

    database_flags {
      name = "cloudsql.logical_decoding"
      value = "on"
    }

    ip_configuration {
      ipv4_enabled = false
      dynamic "authorized_networks" {
      }
      require_ssl = true
    }

    maintenance_window {
      day = 7
      hour = 8
    }
  }

  lifecycle {
    ignore_changes = [
      settings[0].disk_size
    ]
  }
}

On Thursday, May 12, 2022 at 4:50:07 PM UTC-4 fiescocasasola wrote:

Did you follow Google's documentation to set up the PostgreSQL instance? If you used Google's documentation, can you share it, please? The recommended settings are always part of Google's documentation.

Here is a Google document on creating a PostgreSQL instance [1].

[1]: https://cloud.google.com/sql/docs/postgres/create-instance

On Wednesday, May 11, 2022 at 4:26:36 PM UTC-5 r...@transcarent.ai wrote:
We have a PostgreSQL v13 instance where we occasionally get "connection reset by peer" errors.

We are not using the Cloud SQL Auth proxy.

The code is in Go, using the pgx driver with the database/sql interface.

For each client, we have a max of 20 connections, a max of 5 idle connections, and a max connection lifetime of 5 minutes.

We are deploying multiple clients with these settings. Looking at the graph of active connections, we have a max of 500, and in the last week the peak number of active connections was 14.

Are there some recommended settings that we should be using to prevent these occasional "connection reset by peer" errors?

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/0b59b5eb-0d79-4f5c-ba9e-751ad63b6e95n%40googlegroups.com.

Thursday, May 12, 2022

[google-cloud-sql-discuss] Re: Recommended cloud sql postgresql client settings

Yes.

We defined the instance creation in terraform as:

resource "google_sql_database_instance" "postgres" {
  name = "prefix-postgres"
  database_version = "POSTGRES_13"
  deletion_protection = true

  settings {
    tier = "db-custom-4-15360"

    disk_autoresize = true
    disk_size = 200
    disk_type = "PD_SSD"

    activation_policy = "ALWAYS"
    availability_type = "REGIONAL"

    backup_configuration {
      enabled = true
      start_time = "07:00"
      location = "us"
      transaction_log_retention_days = 7
      point_in_time_recovery_enabled = true
      backup_retention_settings {
        retained_backups = 365
        retention_unit = "COUNT"
      }
    }

    database_flags {
      name = "cloudsql.enable_pgaudit"
      value = "on"
    }

    database_flags {
      name = "pgaudit.log_parameter"
      value = "on"
    }

    database_flags {
      name = "cloudsql.logical_decoding"
      value = "on"
    }

    ip_configuration {
      ipv4_enabled = false
      dynamic "authorized_networks" {
      }
      private_network = resource.google_compute_network.default.id
      require_ssl = true
    }

    maintenance_window {
      day = 7
      hour = 8
    }
  }

  lifecycle {
    ignore_changes = [
      settings[0].disk_size
    ]
  }
}

On Thursday, May 12, 2022 at 4:50:07 PM UTC-4 fiescocasasola wrote:

Did you follow Google's documentation to set up the PostgreSQL instance? If you used Google's documentation, can you share it, please? The recommended settings are always part of Google's documentation.

Here is a Google document on creating a PostgreSQL instance [1].

[1]: https://cloud.google.com/sql/docs/postgres/create-instance

On Wednesday, May 11, 2022 at 4:26:36 PM UTC-5 r...@transcarent.ai wrote:
We have a PostgreSQL v13 instance where we occasionally get "connection reset by peer" errors.

We are not using the Cloud SQL Auth proxy.

The code is in Go, using the pgx driver with the database/sql interface.

For each client, we have a max of 20 connections, a max of 5 idle connections, and a max connection lifetime of 5 minutes.

We are deploying multiple clients with these settings. Looking at the graph of active connections, we have a max of 500, and in the last week the peak number of active connections was 14.

Are there some recommended settings that we should be using to prevent these occasional "connection reset by peer" errors?

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/cc933714-8a08-4f69-b6fd-366a2275429dn%40googlegroups.com.

[google-cloud-sql-discuss] Re: Recommended cloud sql postgresql client settings

Did you follow Google's documentation to set up the PostgreSQL instance? If you used Google's documentation, can you share it, please? The recommended settings are always part of Google's documentation.

Here is a Google document on creating a PostgreSQL instance [1].

[1]: https://cloud.google.com/sql/docs/postgres/create-instance

On Wednesday, May 11, 2022 at 4:26:36 PM UTC-5 r...@transcarent.ai wrote:
We have a PostgreSQL v13 instance where we occasionally get "connection reset by peer" errors.

We are not using the Cloud SQL Auth proxy.

The code is in Go, using the pgx driver with the database/sql interface.

For each client, we have a max of 20 connections, a max of 5 idle connections, and a max connection lifetime of 5 minutes.

We are deploying multiple clients with these settings. Looking at the graph of active connections, we have a max of 500, and in the last week the peak number of active connections was 14.

Are there some recommended settings that we should be using to prevent these occasional "connection reset by peer" errors?

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/a663055f-97ab-4d6a-994c-3f0532f3821dn%40googlegroups.com.

[google-cloud-sql-discuss] Cloud SQL Postgres upgrade failure

Hi, I have Cloud SQL Postgres instances on version 11. I would like to upgrade them "in-place" to version 12; however, I am not able to do that because of an old version of PostGIS (currently 2.5.5). When trying to upgrade this extension (using `SELECT postgis_extensions_upgrade();`), I'm getting the error "ERROR:  permission denied for table pg_operator". As far as I can see, the postgres user doesn't have privileges to access pg_operator. Only cloudsqladmin (which is a superuser) has privileges on pg_operator, but on Cloud SQL I am unable to open a psql session as cloudsqladmin.

How can I upgrade postgis in that case?

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/871523db-b8a7-4048-b9ff-64d26464c16bn%40googlegroups.com.

Wednesday, May 11, 2022

[google-cloud-sql-discuss] Re: ENOENT connecting to a mySQL instance using node.js

Did you use any documentation to come up with the code? If so, could you please post the documentation you used? Could you also provide more information to be able to reproduce your project? 

On Monday, May 9, 2022 at 3:56:28 PM UTC-5 giovann...@oraigo.com wrote:
Hi,
I'm having trouble connecting from my service (which is running in Cloud Run) to my DB instance (MySQL). I've searched almost everywhere online, but none of the provided solutions worked for me.
The error happens when I try to query the DB; here is a code snippet:

ABOUT THE POOL INSTANCE
const pool = mysql.createPool({
  user: 'username',
  password: 'password',
  database: 'dbname',
  socketpath: 'the instance name given by the instance info page',
});

HOW I'M TRYING TO QUERY IT
app.get("/:ATT", async (req, res) => {
  const query = "SELECT * FROM tablename WHERE attribute=?";
  pool.query(query, [req.params.ATT], (error, results) => {
    if (error) {
      res.json(error);
    } else {
      res.json({status: "done!"});
    }
  });
});

The error returned is the following:
{"errno":-2,"code":"ENOENT","syscall":"connect","address":"/cloudsql/ the instance name given by the instance info page  ","fatal":true}

I'm 100% sure the instance name I'm using is the right one.

Can anyone help?
I can't get past it.
Thanks in advance.

PS: I already tried using the same region for all parts of the project, but nothing changed.
I also tried adding "s.PGSQL.5432" at the end of the instance name, which should be completely useless since it is for PostgreSQL, I assume (but I was completely lost, so I gave it a try anyway).

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/34ba663e-5f7b-4b5c-99cb-e23bb6731387n%40googlegroups.com.

[google-cloud-sql-discuss] Recommended cloud sql postgresql client settings

We have a PostgreSQL v13 instance where we occasionally get "connection reset by peer" errors.

We are not using the Cloud SQL Auth proxy.

The code is in Go, using the pgx driver with the database/sql interface.

For each client, we have a max of 20 connections, a max of 5 idle connections, and a max connection lifetime of 5 minutes.

We are deploying multiple clients with these settings. Looking at the graph of active connections, we have a max of 500, and in the last week the peak number of active connections was 14.

Are there some recommended settings that we should be using to prevent these occasional "connection reset by peer" errors?

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/72353ac0-0fd7-463a-ab6f-27a568d5e2f1n%40googlegroups.com.

[google-cloud-sql-discuss] Re: How to setup replication from external on premise sql server to Cloud Sql Server

For SQL Server 2014, it seems you would have to rely on migrating through a backup and restore. As shown in the documentation, Database Migration Service is not yet compatible with SQL Server (a private preview request form is available on that documentation page). In addition, the current Cloud SQL replication documentation for SQL Server only supports replicating between Cloud SQL instances.

You can also follow this guide for migrating your SQL Server database, which contains a step-by-step process.
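
Once you have a .bak of a single database in Cloud Storage, the import step is roughly the following (a sketch; the instance, bucket, and database names are placeholders):

gcloud sql import bak INSTANCE_NAME gs://BUCKET_NAME/sqlserver-backup.bak --database=DATABASE_NAME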


On Friday, May 6, 2022 at 4:22:04 PM UTC-5 lillia...@gmail.com wrote:
Hello,

I am working on a plan to migrate data from an external on-premises SQL Server 2014 Enterprise Edition instance to Google Cloud SQL for SQL Server using replication. Is this possible?

I searched the Google docs, but I only see instructions for MySQL replication, not SQL Server. Is SQL Server replication supported? If not, is a full backup/restore the only option?

Any feedback will be appreciated.

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/a0be8439-a716-449e-8bff-e57475ba9c64n%40googlegroups.com.

Monday, May 9, 2022

[google-cloud-sql-discuss] ENOENT connecting to a mySQL instance using node.js

Hi,
I'm having trouble connecting from my service (which is running in Cloud Run) to my DB instance (MySQL). I've searched almost everywhere online, but none of the provided solutions worked for me.
The error happens when I try to query the DB; here is a code snippet:

ABOUT THE POOL INSTANCE
const pool = mysql.createPool({
  user: 'username',
  password: 'password',
  database: 'dbname',
  socketpath: 'the instance name given by the instance info page',
});

HOW I'M TRYING TO QUERY IT
app.get("/:ATT", async (req, res) => {
  const query = "SELECT * FROM tablename WHERE attribute=?";
  pool.query(query, [req.params.ATT], (error, results) => {
    if (error) {
      res.json(error);
    } else {
      res.json({status: "done!"});
    }
  });
});

The error returned is the following:
{"errno":-2,"code":"ENOENT","syscall":"connect","address":"/cloudsql/ the instance name given by the instance info page  ","fatal":true}

I'm 100% sure the instance name I'm using is the right one.

Can anyone help?
I can't get past it.
Thanks in advance.

PS: I already tried using the same region for all parts of the project, but nothing changed.
I also tried adding "s.PGSQL.5432" at the end of the instance name, which should be completely useless since it is for PostgreSQL, I assume (but I was completely lost, so I gave it a try anyway).

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/77a94f8f-1730-4c1b-86e9-8ae1c2ed2f19n%40googlegroups.com.

Thursday, May 5, 2022

[google-cloud-sql-discuss] How to setup replication from external on premise sql server to Cloud Sql Server

Hello,

I am working on a plan to migrate data from an external on-premises SQL Server 2014 Enterprise Edition instance to Google Cloud SQL for SQL Server using replication. Is this possible?

I searched the Google docs, but I only see instructions for MySQL replication, not SQL Server. Is SQL Server replication supported? If not, is a full backup/restore the only option?

Any feedback will be appreciated.

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/bcd1c1e2-4089-499b-93f8-b2dae6acdb11n%40googlegroups.com.