Wednesday, March 31, 2021

[google-cloud-sql-discuss] Re: Can't Add Storage Capacity of PostgreSQL Cloud SQL

I experienced the same issue. For me the trick was to do it with the gcloud CLI instead. That worked like a charm:
gcloud sql instances patch <instance-name> --storage-size=250
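If you want to confirm the change took effect afterwards, something like this should work (the settings.dataDiskSizeGb field holds the instance's current disk size):

gcloud sql instances describe <instance-name> --format="value(settings.dataDiskSizeGb)"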

Good luck!
On Monday, 29 March 2021 at 11:45:17 UTC+1 muhamma...@qasir.id wrote:
Hi,

When I try to increase the storage capacity of my PostgreSQL instance from 200 GB to 250 GB, I don't get any error message, but the storage capacity of my instance doesn't change from 200 GB to 250 GB. Could anyone help me, please? Thank you


Tuesday, March 30, 2021

[google-cloud-sql-discuss] Re: Consistent "Segmentation fault" error in logs when running a query

Does this query involve either a LEFT JOIN or aggregate functions? We have a very similar issue with Cloud SQL Postgres 12.5, with four different queries causing segmentation faults, and these are the only similarities between them. So far we have not been able to reproduce the segfaults on a local PG instance running the same DB and queries, and unfortunately it's not possible to attach a debugger to Cloud SQL (as far as we know).

On Tuesday, March 30, 2021 at 3:23:04 AM UTC+13 e...@contractbook.dk wrote:
We're seeing a "Segmentation fault" error in logs for our staging environment, caused by a single specific DB query.

I am unable to replicate the problem consistently by running the same query manually, but we see the issue in logs basically every day now.

Our environment is: PostgreSQL 13.1, the DB tier is "db-custom-1-3840".

In the first half of February, we were seeing "Segmentation fault" in relation to another DB query and a different DB instance. That issue somehow resolved itself around Feb 15.

I'd love to provide more info to diagnose this – is there any other info I could provide that would help solve this case? Otherwise, would it be possible for a Google Cloud SQL engineer to look into this for us?


Monday, March 29, 2021

[google-cloud-sql-discuss] Re: User with cloudsqlsuperuser denied access to database

In MySQL 8.0 for Cloud SQL, when you create a new user, the user is automatically granted the cloudsqlsuperuser role. This role gives the user all of the MySQL static privileges except SUPER and FILE, as well as the following dynamic privileges:

- APPLICATION_PASSWORD_ADMIN
- CONNECTION_ADMIN
- ROLE_ADMIN
- SET_USER_ID
- XA_RECOVER_ADMIN

This link [1] provides all the details on the permissions and privileges offered by the cloudsqlsuperuser role.

[1] https://cloud.google.com/sql/docs/mysql/users#cloudsqlsuperuser
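If you want to check exactly which privileges a given user ended up with, you can list its grants from a MySQL client (myuser and the wildcard host here are placeholders):

SHOW GRANTS FOR 'myuser'@'%';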

On Friday, March 26, 2021 at 6:49:07 PM UTC-4 Mark S wrote:
Cloud SQL MySQL 8.0
My user has the cloudsqlsuperuser role, and running a 'USE database' command results in access denied. The database I'm attempting to use is not a system database (it is not sys, information_schema, or mysql).

Perhaps I'm misunderstanding how the cloudsqlsuperuser role is intended to work, but I expected the user to have full permissions on any non-system database with this role?


[google-cloud-sql-discuss] Consistent "Segmentation fault" error in logs when running a query

We're seeing a "Segmentation fault" error in logs for our staging environment, caused by a single specific DB query.

I am unable to replicate the problem consistently by running the same query manually, but we see the issue in logs basically every day now.

Our environment is: PostgreSQL 13.1, the DB tier is "db-custom-1-3840".

In the first half of February, we were seeing "Segmentation fault" in relation to another DB query and a different DB instance. That issue somehow resolved itself around Feb 15.

I'd love to provide more info to diagnose this – is there any other info I could provide that would help solve this case? Otherwise, would it be possible for a Google Cloud SQL engineer to look into this for us?


Sunday, March 28, 2021

[google-cloud-sql-discuss] Can't Add Storage Capacity of PostgreSQL Cloud SQL

Hi,

When I try to increase the storage capacity of my PostgreSQL instance from 200 GB to 250 GB, I don't get any error message, but the storage capacity of my instance doesn't change from 200 GB to 250 GB. Could anyone help me, please? Thank you


Friday, March 26, 2021

[google-cloud-sql-discuss] User with cloudsqlsuperuser denied access to database

Cloud SQL MySQL 8.0
My user has the cloudsqlsuperuser role, and running a 'USE database' command results in access denied. The database I'm attempting to use is not a system database (it is not sys, information_schema, or mysql).

Perhaps I'm misunderstanding how the cloudsqlsuperuser role is intended to work, but I expected the user to have full permissions on any non-system database with this role?


Thursday, March 25, 2021

[google-cloud-sql-discuss] Re: Unable to connect to CloudSQL Postgres Instance with Private IP

Hello,

Note that Cloud SQL PostgreSQL instances do not support IPv6, and you may temporarily allow all IP addresses to connect to an instance by authorizing 0.0.0.0/0. If the issue still persists, I recommend creating a PRIVATE issue tracker ticket, or reporting the issue via your support package, along with your instance ID and the redacted output of the gcloud command you are using to connect to the instance with the "--verbosity" flag set to "debug", so that we can dig into the issue.
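For example, with <instance-name> as a placeholder (and keeping in mind that 0.0.0.0/0 should only be authorized briefly, for testing):

gcloud sql instances patch <instance-name> --authorized-networks=0.0.0.0/0
gcloud sql connect <instance-name> --user=postgres --verbosity=debug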

On Thursday, March 25, 2021 at 11:51:01 AM UTC-4 roc...@gmail.com wrote:
I'm getting the below error message when I'm trying to connect to a new CloudSqL instance through cloud shell.

ERROR: (gcloud.sql.connect) It seems your client does not have ipv6 connectivity and the database instance does not have an ipv4 address. Please request an ipv4 address for this database instance.

Networking has been set up to use a private service connection. I set this connection to a shared VPC. I see the "Private connection to service" connection and it's attached to an internal range, but I cannot connect and get the above error. Any help or direction would be great. Thank you


[google-cloud-sql-discuss] Unable to connect to CloudSQL Postgres Instance with Private IP

I'm getting the below error message when I'm trying to connect to a new CloudSqL instance through cloud shell.

ERROR: (gcloud.sql.connect) It seems your client does not have ipv6 connectivity and the database instance does not have an ipv4 address. Please request an ipv4 address for this database instance.

Networking has been set up to use a private service connection. I set this connection to a shared VPC. I see the "Private connection to service" connection and it's attached to an internal range, but I cannot connect and get the above error. Any help or direction would be great. Thank you


Tuesday, March 23, 2021

[google-cloud-sql-discuss] Re: MySQL "Lightweight" vs. "Standard" instances


The legacy machine type name is mapped to its equivalent string in the db-custom-<CPU>-<RAM> format. Here is the table (https://cloud.google.com/sql/docs/mysql/instance-settings#machine-type-2ndgen) you can refer to.
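For example, the legacy db-n1-standard-1 tier (1 vCPU, 3.75 GB of RAM) corresponds to db-custom-1-3840, since the RAM part is expressed in MB. An equivalent instance could be created with something like this (instance name and region are placeholders):

gcloud sql instances create my-instance \
  --tier=db-custom-1-3840 \
  --region=us-central1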


On Tuesday, March 23, 2021 at 11:43:26 AM UTC-4 dmi...@gmail.com wrote:
Hi,

does anyone have any specifics what machine types the "Lightweight" vs. the "Standard" machine type maps to?

I suspect "Lightweight" is based on E2 and "Standard" on N1.

When creating a new instance this is the selection for the machine type now:

Screenshot_20210322_202500.png

It's a bit annoying as the GCP calculator doesn't have a matching selection so figuring out pricing is kinda hard.

Thanks for any hints on how to figure out the pricing!





[google-cloud-sql-discuss] Re: Private GKE cluster with public endpoint can't connect to public Cloud SQL

Thank you both for your answers! What confused me is that, to determine whether my pod has access to the Internet, I tried fetching www.google.com, which surprisingly worked. Trying to fetch any other website fails though, so it seems that www.google.com is considered "internal" by GCP.

On Tuesday, March 23, 2021 at 2:40:15 AM UTC-7 nibrass wrote:

Hello,

The Cloud SQL proxy uses the instance's public IP to connect, and since your cluster is private with no internet access from the nodes, it is not possible to connect that way. To mitigate this issue, you will need to use [private IP][1] for your SQL instance or configure a [NAT gateway for your cluster][2].


Best Regards,

Nibrass 

[1]: https://cloud.google.com/sql/docs/mysql/private-ip

[2]: https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine
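If you go the private IP route, the v1 proxy can be told to prefer it with the -ip_address_types flag; here is a sketch based on the sidecar from the original post:

      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.20.2
        command:
          - "/cloud_sql_proxy"
          - "-instances=my-project:us-central1:my-db=tcp:5432"
          - "-ip_address_types=PRIVATE"
          - "-term_timeout=20s"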


On Monday, March 22, 2021 at 2:53:23 PM UTC+1 tawat...@tangerine.co.th wrote:
Hi Juliusz,

I think your problem is about Cloud NAT & Cloud Router, because:
1. GKE private mode uses Cloud NAT & Cloud Router for public access.
2. The Cloud SQL proxy connects using public access.


Using the proxy with private IP


Thanks,
Tawatchai W.


On Monday, March 22, 2021 at 5:37:44 PM UTC+7 jgo...@gmail.com wrote:
Hi,

I've tried googling but I only find solutions to problems with private Cloud SQL instances. I'd be grateful for any help as I've been banging my head half of the day...

I have a GKE cluster created with this command:

gcloud container clusters create my-cluster \
  --disk-size=10GB \
  --machine-type=e2-small \
  --node-locations=us-central1-b,us-central1-c,us-central1-f \
  --num-nodes=1 \
  --preemptible \
  --release-channel=regular \
  --workload-pool=my-project.svc.id.goog \
  --zone=us-central1-f \
  --no-enable-master-authorized-networks \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr 172.16.0.32/28

And a Cloud SQL instance created with:

gcloud services enable sqladmin.googleapis.com
gcloud sql instances create my-db \
  --database-version=POSTGRES_12 \
  --region=us-central1 \
  --storage-auto-increase \
  --storage-size=10 \
  --storage-type=SSD \
  --tier=db-f1-micro

In my pod I have the following sidecar container:

      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.20.2
        command:
          - "/cloud_sql_proxy"
          - "-instances=my-project:us-central1:my-db=tcp:5432"
          - "-term_timeout=20s"
        securityContext:
          runAsNonRoot: true


The pod uses a service account that has been created and configured with these commands:

gcloud iam service-accounts create my-service-account
gcloud iam service-accounts add-iam-policy-binding \
  --role=roles/iam.workloadIdentityUser \
  --member="serviceAccount:my-project.svc.id.goog[default/my-service-account]" \
  my-servic...@my-project.iam.gserviceaccount.com
gcloud projects add-iam-policy-binding my-project \
  --member serviceAccount:"my-servic...@my-project.iam.gserviceaccount.com" \
  --role "roles/cloudsql.client"


Now when I try to connect to Postgres through cloud-sql-proxy in my app, the connection times out with the following error in cloud-sql-proxy's logs:

2021/03/19 21:51:29 couldn't connect to "my-project:us-central1:my-db": dial tcp MY_DB_PUBLIC_IP:3307: connect: connection timed out

Interestingly enough, I can run cloud-sql-proxy on my laptop to connect to the same instance without any problems. I checked my app's container in the pod and it has access to the public Internet. What am I missing?

Thanks,
Juliusz


[google-cloud-sql-discuss] Re: Private GKE cluster with public endpoint can't connect to public Cloud SQL


Hello,

The Cloud SQL proxy uses the instance's public IP to connect, and since your cluster is private with no internet access from the nodes, it is not possible to connect that way. To mitigate this issue, you will need to use [private IP][1] for your SQL instance or configure a [NAT gateway for your cluster][2].


Best Regards,

Nibrass 

[1]: https://cloud.google.com/sql/docs/mysql/private-ip

[2]: https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine


On Monday, March 22, 2021 at 2:53:23 PM UTC+1 tawat...@tangerine.co.th wrote:
Hi Juliusz,

I think your problem is about Cloud NAT & Cloud Router, because:
1. GKE private mode uses Cloud NAT & Cloud Router for public access.
2. The Cloud SQL proxy connects using public access.


Using the proxy with private IP


Thanks,
Tawatchai W.


On Monday, March 22, 2021 at 5:37:44 PM UTC+7 jgo...@gmail.com wrote:
Hi,

I've tried googling but I only find solutions to problems with private Cloud SQL instances. I'd be grateful for any help as I've been banging my head half of the day...

I have a GKE cluster created with this command:

gcloud container clusters create my-cluster \
  --disk-size=10GB \
  --machine-type=e2-small \
  --node-locations=us-central1-b,us-central1-c,us-central1-f \
  --num-nodes=1 \
  --preemptible \
  --release-channel=regular \
  --workload-pool=my-project.svc.id.goog \
  --zone=us-central1-f \
  --no-enable-master-authorized-networks \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr 172.16.0.32/28

And a Cloud SQL instance created with:

gcloud services enable sqladmin.googleapis.com
gcloud sql instances create my-db \
  --database-version=POSTGRES_12 \
  --region=us-central1 \
  --storage-auto-increase \
  --storage-size=10 \
  --storage-type=SSD \
  --tier=db-f1-micro

In my pod I have the following sidecar container:

      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.20.2
        command:
          - "/cloud_sql_proxy"
          - "-instances=my-project:us-central1:my-db=tcp:5432"
          - "-term_timeout=20s"
        securityContext:
          runAsNonRoot: true


The pod uses a service account that has been created and configured with these commands:

gcloud iam service-accounts create my-service-account
gcloud iam service-accounts add-iam-policy-binding \
  --role=roles/iam.workloadIdentityUser \
  --member="serviceAccount:my-project.svc.id.goog[default/my-service-account]" \
  my-servic...@my-project.iam.gserviceaccount.com
gcloud projects add-iam-policy-binding my-project \
  --member serviceAccount:"my-servic...@my-project.iam.gserviceaccount.com" \
  --role "roles/cloudsql.client"


Now when I try to connect to Postgres through cloud-sql-proxy in my app, the connection times out with the following error in cloud-sql-proxy's logs:

2021/03/19 21:51:29 couldn't connect to "my-project:us-central1:my-db": dial tcp MY_DB_PUBLIC_IP:3307: connect: connection timed out

Interestingly enough, I can run cloud-sql-proxy on my laptop to connect to the same instance without any problems. I checked my app's container in the pod and it has access to the public Internet. What am I missing?

Thanks,
Juliusz


Monday, March 22, 2021

[google-cloud-sql-discuss] MySQL "Lightweight" vs. "Standard" instances

Hi,

does anyone have any specifics what machine types the "Lightweight" vs. the "Standard" machine type maps to?

I suspect "Lightweight" is based on E2 and "Standard" on N1.

When creating a new instance this is the selection for the machine type now:

Screenshot_20210322_202500.png

It's a bit annoying as the GCP calculator doesn't have a matching selection so figuring out pricing is kinda hard.

Thanks for any hints on how to figure out the pricing!





[google-cloud-sql-discuss] Re: Private GKE cluster with public endpoint can't connect to public Cloud SQL

Hi Juliusz,

I think your problem is about Cloud NAT & Cloud Router, because:
1. GKE private mode uses Cloud NAT & Cloud Router for public access.
2. The Cloud SQL proxy connects using public access.


Using the proxy with private IP
https://cloud.google.com/sql/docs/sqlserver/connect-admin-proxy#private-ip


Thanks,
Tawatchai W.

On Monday, March 22, 2021 at 5:37:44 PM UTC+7 jgo...@gmail.com wrote:
Hi,

I've tried googling but I only find solutions to problems with private Cloud SQL instances. I'd be grateful for any help as I've been banging my head half of the day...

I have a GKE cluster created with this command:

gcloud container clusters create my-cluster \
  --disk-size=10GB \
  --machine-type=e2-small \
  --node-locations=us-central1-b,us-central1-c,us-central1-f \
  --num-nodes=1 \
  --preemptible \
  --release-channel=regular \
  --workload-pool=my-project.svc.id.goog \
  --zone=us-central1-f \
  --no-enable-master-authorized-networks \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr 172.16.0.32/28

And a Cloud SQL instance created with:

gcloud services enable sqladmin.googleapis.com
gcloud sql instances create my-db \
  --database-version=POSTGRES_12 \
  --region=us-central1 \
  --storage-auto-increase \
  --storage-size=10 \
  --storage-type=SSD \
  --tier=db-f1-micro

In my pod I have the following sidecar container:

      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.20.2
        command:
          - "/cloud_sql_proxy"
          - "-instances=my-project:us-central1:my-db=tcp:5432"
          - "-term_timeout=20s"
        securityContext:
          runAsNonRoot: true


The pod uses a service account that has been created and configured with these commands:

gcloud iam service-accounts create my-service-account
gcloud iam service-accounts add-iam-policy-binding \
  --role=roles/iam.workloadIdentityUser \
  --member="serviceAccount:my-project.svc.id.goog[default/my-service-account]" \
  my-servic...@my-project.iam.gserviceaccount.com
gcloud projects add-iam-policy-binding my-project \
  --member serviceAccount:"my-servic...@my-project.iam.gserviceaccount.com" \
  --role "roles/cloudsql.client"


Now when I try to connect to Postgres through cloud-sql-proxy in my app, the connection times out with the following error in cloud-sql-proxy's logs:

2021/03/19 21:51:29 couldn't connect to "my-project:us-central1:my-db": dial tcp MY_DB_PUBLIC_IP:3307: connect: connection timed out

Interestingly enough, I can run cloud-sql-proxy on my laptop to connect to the same instance without any problems. I checked my app's container in the pod and it has access to the public Internet. What am I missing?

Thanks,
Juliusz


Friday, March 19, 2021

[google-cloud-sql-discuss] Connecting through Cloud SQL proxy fails in Windows. Strange error

C:\Program Files (x86)\Google\Cloud SDK>.\cloud_sql_proxy_x64.exe
2021/03/20 10:58:44 Using gcloud's active project: [test-polls-307710]
2021/03/20 10:58:45 Error listing instances in test-polls-307710: Get "https://sqladmin.googleapis.com/sql/v1beta4/projects/test-polls-307710/instances?alt=json&prettyPrint=false": read tcp 192.168.50.20:61059->216.58.200.42:443: wsarecv: An existing connection was forcibly closed by the remote host.


I confirmed that I have created three instances in this project.


[google-cloud-sql-discuss] Private GKE cluster with public endpoint can't connect to public Cloud SQL

Hi,

I've tried googling but I only find solutions to problems with private Cloud SQL instances. I'd be grateful for any help as I've been banging my head half of the day...

I have a GKE cluster created with this command:

gcloud container clusters create my-cluster \
  --disk-size=10GB \
  --machine-type=e2-small \
  --node-locations=us-central1-b,us-central1-c,us-central1-f \
  --num-nodes=1 \
  --preemptible \
  --release-channel=regular \
  --workload-pool=my-project.svc.id.goog \
  --zone=us-central1-f \
  --no-enable-master-authorized-networks \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr 172.16.0.32/28

And a Cloud SQL instance created with:

gcloud services enable sqladmin.googleapis.com
gcloud sql instances create my-db \
  --database-version=POSTGRES_12 \
  --region=us-central1 \
  --storage-auto-increase \
  --storage-size=10 \
  --storage-type=SSD \
  --tier=db-f1-micro

In my pod I have the following sidecar container:

      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.20.2
        command:
          - "/cloud_sql_proxy"
          - "-instances=my-project:us-central1:my-db=tcp:5432"
          - "-term_timeout=20s"
        securityContext:
          runAsNonRoot: true


The pod uses a service account that has been created and configured with these commands:

gcloud iam service-accounts create my-service-account
gcloud iam service-accounts add-iam-policy-binding \
  --role=roles/iam.workloadIdentityUser \
  --member="serviceAccount:my-project.svc.id.goog[default/my-service-account]" \
  my-service-account@my-project.iam.gserviceaccount.com
gcloud projects add-iam-policy-binding my-project \
  --member serviceAccount:"my-service-account@my-project.iam.gserviceaccount.com" \
  --role "roles/cloudsql.client"


Now when I try to connect to Postgres through cloud-sql-proxy in my app, the connection times out with the following error in cloud-sql-proxy's logs:

2021/03/19 21:51:29 couldn't connect to "my-project:us-central1:my-db": dial tcp MY_DB_PUBLIC_IP:3307: connect: connection timed out

Interestingly enough, I can run cloud-sql-proxy on my laptop to connect to the same instance without any problems. I checked my app's container in the pod and it has access to the public Internet. What am I missing?

Thanks,
Juliusz


[google-cloud-sql-discuss] Re: How to connect to Cloudsql postgres(private-ip) from outside the VPC

Hi, 

Could you please provide more information about the IPs from outside your VPC? You could connect to a Cloud SQL instance from external sources over a VPN tunnel or Cloud Interconnect to your VPC network [1]. If you are referring to a serverless environment on Google Cloud such as fully-managed Cloud Run, Cloud Functions, or App Engine Standard, you could use Serverless VPC Access to connect [2].
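For the serverless case, a Serverless VPC Access connector can be created with something like this (connector name, network, and IP range are placeholders to adapt):

gcloud compute networks vpc-access connectors create my-connector \
  --region=us-central1 \
  --network=default \
  --range=10.8.0.0/28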



On Tuesday, March 16, 2021 at 5:30:57 PM UTC-4 nirmeshjai...@gmail.com wrote:

Hello Team, hope you are all doing well. My use case is that my Cloud SQL Postgres instance has a private IP, and I want some IPs from outside my VPC (environment) to be able to talk to this particular Cloud SQL instance. Can anyone please help, or tell me the steps to implement this scenario? Note: a private IP for Cloud SQL is mandatory.




Thursday, March 18, 2021

[google-cloud-sql-discuss] Re: Importing binary logs from an external MySQL database

Hello,

In Cloud SQL you cannot get SUPER privileges. However, since your end goal is to replicate from an external server, please review the relevant documentation in order to do this [1], [2], [3]. If you require additional help getting it accomplished, say you run into errors while doing it, I would recommend contacting GCP support, as they would be able to work with you on getting the replication accomplished. You could also try posting any errors you encounter on Stack Overflow, where the community may be able to provide some insight: Google Groups is meant for conceptual discussions, whereas Stack Overflow is a good place to post any errors you need troubleshooting assistance with.
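As a possible (untested) workaround for the specific error quoted below, you could try filtering the session-level statements that require elevated privileges out of the mysqlbinlog output before importing; the binlog file name is a placeholder and the filter list may need tuning for your own logs:

mysqlbinlog mysql-bin.NNNNNN \
  | grep -v -E "SET @@SESSION\.(PSEUDO_SLAVE_MODE|GTID_NEXT)|pseudo_thread_id" \
  > import_filtered.sql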
On Monday, March 15, 2021 at 7:02:21 PM UTC-4 sheepo...@gmail.com wrote:
I would like to replicate a MySQL database that runs on a non-cloud server. I understand that master-slave MySQL replication requires the master to enable incoming connections from the slave. Since the sites I deploy to don't allow that, I opted for periodically generating and exporting binary logs, and then importing the resulting SQL into the Cloud MySQL instance (see attached file).

I run the following command:
>gcloud sql import sql spring-cloud-sql gs://retino_cloud/import_failure.sql

My problem is that the import fails with the error:
ERROR 1227 (42000) at line 1: Access denied; you need (at least one of) the SUPER, SYSTEM_VARIABLES_ADMIN, SESSION_VARIABLES_ADMIN or REPLICATION_APPLIER privilege(s) for this operation

Is there a better way to perform the replication? If not, does GCloud MySQL support import of binary logs (transformed to SQL)?

Thanks,
Yoav.


Tuesday, March 16, 2021

[google-cloud-sql-discuss] How to connect to Cloudsql postgres(private-ip) from outside the VPC


Hello Team, hope you are all doing well. My use case is that my Cloud SQL Postgres instance has a private IP, and I want some IPs from outside my VPC (environment) to be able to talk to this particular Cloud SQL instance. Can anyone please help, or tell me the steps to implement this scenario? Note: a private IP for Cloud SQL is mandatory.




Monday, March 15, 2021

[google-cloud-sql-discuss] Importing binary logs from an external MySQL database

/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 4
#210315 12:44:01 server id 1 end_log_pos 123 CRC32 0xc24636dd Start: binlog v 4, server v 5.7.30-0ubuntu0.18.04.1-log created 210315 12:44:01
BINLOG '
cTpPYA8BAAAAdwAAAHsAAAAAAAQANS43LjMwLTB1YnVudHUwLjE4LjA0LjEtbG9nAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAEzgNAAgAEgAEBAQEEgAAXwAEGggAAAAICAgCAAAACgoKKioAEjQA
Ad02RsI=
'/*!*/;
# at 123
#210315 12:44:01 server id 1 end_log_pos 154 CRC32 0xba57cb2a Previous-GTIDs
# [empty]
# at 154
#210315 12:44:40 server id 1 end_log_pos 219 CRC32 0x99d5ba8c Anonymous_GTID last_committed=0 sequence_number=1 rbr_only=yes
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 219
#210315 12:44:40 server id 1 end_log_pos 296 CRC32 0xcd87ad25 Query thread_id=7 exec_time=0 error_code=0
SET TIMESTAMP=1615805080/*!*/;
SET @@session.pseudo_thread_id=7/*!*/;
SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
SET @@session.sql_mode=1436549160/*!*/;
SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
/*!\C utf8mb4 *//*!*/;
SET @@session.character_set_client=45,@@session.collation_connection=45,@@session.collation_server=8/*!*/;
SET @@session.lc_time_names=0/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
BEGIN
/*!*/;
# at 296
#210315 12:44:40 server id 1 end_log_pos 389 CRC32 0xc1c4480d Table_map: `spring_db`.`Patients` mapped to number 111
# at 389
#210315 12:44:40 server id 1 end_log_pos 511 CRC32 0x5ecf47d9 Update_rows: table id 111 flags: STMT_END_F

BINLOG '
mDpPYBMBAAAAXQAAAIUBAAAAAG8AAAAAAAEACXNwcmluZ19kYgAIUGF0aWVudHMADv7+Dw8P/goP
Dw8PDxEDF+4A7gwAAQABAAH+BAAFQABAAAAFQAAAiC8NSMTB
mDpPYB8BAAAAegAAAP8BAAAAAG8AAAAAAAEAAgAO/////4DPAwAyMjIGAFVTQTIyMgUAQnJ1Y2UA
AAMATGVlAU1vuA9gTzYDBDZPYIDPAwAyMjIGAFVTQTIyMgUAQnJ1Y2UAAAMATGVlAU1vuA9gTzYD
mDpPYNlHz14=
'/*!*/;
# at 511
#210315 12:44:40 server id 1 end_log_pos 542 CRC32 0xebb0aa94 Xid = 2577
COMMIT/*!*/;
# at 542
#210315 12:44:40 server id 1 end_log_pos 607 CRC32 0x44038fd6 Anonymous_GTID last_committed=1 sequence_number=2 rbr_only=yes
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 607
#210315 12:44:40 server id 1 end_log_pos 684 CRC32 0x32e11450 Query thread_id=7 exec_time=0 error_code=0
SET TIMESTAMP=1615805080/*!*/;
BEGIN
/*!*/;
# at 684
#210315 12:44:40 server id 1 end_log_pos 750 CRC32 0xd86f29b1 Table_map: `spring_db`.`Visits` mapped to number 112
# at 750
#210315 12:44:40 server id 1 end_log_pos 850 CRC32 0x5db77995 Write_rows: table id 112 flags: STMT_END_F

BINLOG '
mDpPYBMBAAAAQgAAAO4CAAAAAHAAAAAAAAEACXNwcmluZ19kYgAGVmlzaXRzAAX+A/7+/gj+GP6w
7gzuDBCxKW/Y
mDpPYB4BAAAAZAAAAFIDAAAAAHAAAAAAAAEAAgAF//AGMTIzNDU2mDpPYCwxLjIuODI2LjAuMS4z
NjgwMDQzLjEwLjQ2My4xMjM0NTYuMTYxNTgwNTA4MAYAVVNBMjIylXm3XQ==
'/*!*/;
# at 850
#210315 12:44:40 server id 1 end_log_pos 881 CRC32 0x62e04f2d Xid = 2578
COMMIT/*!*/;
# at 881
#210315 12:44:50 server id 1 end_log_pos 946 CRC32 0xf0bd2ed4 Anonymous_GTID last_committed=2 sequence_number=3 rbr_only=yes
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 946
#210315 12:44:50 server id 1 end_log_pos 1023 CRC32 0xcb0e4e74 Query thread_id=7 exec_time=0 error_code=0
SET TIMESTAMP=1615805090/*!*/;
BEGIN
/*!*/;
# at 1023
#210315 12:44:50 server id 1 end_log_pos 1097 CRC32 0x35fb7f8a Table_map: `spring_db`.`Series` mapped to number 110
# at 1097
#210315 12:44:50 server id 1 end_log_pos 1257 CRC32 0xbedc1ccf Write_rows: table id 110 flags: STMT_END_F

BINLOG '
ojpPYBMBAAAASgAAAEkEAAAAAG4AAAAAAAEACXNwcmluZ19kYgAGU2VyaWVzAAr+Awj+Af7+AQQE
Cv4Y/rD+wPcBBAQAA4p/+zU=
ojpPYB4BAAAAoAAAAOkEAAAAAG4AAAAAAAEAAgAK//8A/AYxMjM0NTaYOk9g2wh9NXgBAAAsMS4y
LjgyNi4wLjEuMzY4MDA0My4xMC40NjMuMTIzNDU2LjE2MTU4MDUwODABLjEuMi44MjYuMC4xLjM2
ODAwNDMuMTAuNDYzLjEyMzQ1Ni4xNjE1ODA1MDgwLjEBAFGiqz6yqkQ/zxzcvg==
'/*!*/;
# at 1257
#210315 12:44:50 server id 1 end_log_pos 1288 CRC32 0x71fec81b Xid = 2581
COMMIT/*!*/;
# at 1288
#210315 12:44:53 server id 1 end_log_pos 1353 CRC32 0x21cb7d67 Anonymous_GTID last_committed=3 sequence_number=4 rbr_only=yes
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 1353
#210315 12:44:53 server id 1 end_log_pos 1430 CRC32 0xbd13cfd2 Query thread_id=7 exec_time=0 error_code=0
SET TIMESTAMP=1615805093/*!*/;
BEGIN
/*!*/;
# at 1430
#210315 12:44:53 server id 1 end_log_pos 1504 CRC32 0x9b9d2f36 Table_map: `spring_db`.`Images` mapped to number 109
# at 1504
#210315 12:44:53 server id 1 end_log_pos 1658 CRC32 0x4d939f8d Write_rows: table id 109 flags: STMT_END_F

BINLOG '
pTpPYBMBAAAASgAAAOAFAAAAAG0AAAAAAAEACXNwcmluZ19kYgAGSW1hZ2VzAAr+A/4B/v7+AQEB
Cv4Y/sDuAPcB9wEAADYvnZs=
pTpPYB4BAAAAmgAAAHoGAAAAAG0AAAAAAAEAAgAK//8A/AYxMjM0NTaYOk9gLjEuMi44MjYuMC4x
LjM2ODAwNDMuMTAuNDYzLjEyMzQ1Ni4xNjE1ODA1MDgwLjEBMgAxLjIuODI2LjAuMS4zNjgwMDQz
LjEwLjQ2My4xMjM0NTYuMTYxNTgwNTA4MC4xLjEuMQEBAAAAjZ+TTQ==
'/*!*/;
# at 1658
#210315 12:44:53 server id 1 end_log_pos 1689 CRC32 0x7960cb80 Xid = 2588
COMMIT/*!*/;
# at 1689
#210315 12:44:53 server id 1 end_log_pos 1754 CRC32 0x59396a97 Anonymous_GTID last_committed=4 sequence_number=5 rbr_only=yes
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 1754
#210315 12:44:53 server id 1 end_log_pos 1831 CRC32 0x5122bb2a Query thread_id=7 exec_time=0 error_code=0
SET TIMESTAMP=1615805093/*!*/;
BEGIN
/*!*/;
# at 1831
#210315 12:44:53 server id 1 end_log_pos 1905 CRC32 0x0ab3f036 Table_map: `spring_db`.`Images` mapped to number 109
# at 1905
#210315 12:44:53 server id 1 end_log_pos 2179 CRC32 0x3b3172a5 Update_rows: table id 109 flags: STMT_END_F

BINLOG '
pTpPYBMBAAAASgAAAHEHAAAAAG0AAAAAAAEACXNwcmluZ19kYgAGSW1hZ2VzAAr+A/4B/v7+AQEB
Cv4Y/sDuAPcB9wEAADbwswo=
pTpPYB8BAAAAEgEAAIMIAAAAAG0AAAAAAAEAAgAK/////wD8BjEyMzQ1Npg6T2AuMS4yLjgyNi4w
LjEuMzY4MDA0My4xMC40NjMuMTIzNDU2LjE2MTU4MDUwODAuMQEyADEuMi44MjYuMC4xLjM2ODAw
NDMuMTAuNDYzLjEyMzQ1Ni4xNjE1ODA1MDgwLjEuMS4xAQEAAAAA/AYxMjM0NTaYOk9gLjEuMi44
MjYuMC4xLjM2ODAwNDMuMTAuNDYzLjEyMzQ1Ni4xNjE1ODA1MDgwLjEBMgAxLjIuODI2LjAuMS4z
NjgwMDQzLjEwLjQ2My4xMjM0NTYuMTYxNTgwNTA4MC4xLjEuMQEBAQAApXIxOw==
'/*!*/;
# at 2179
#210315 12:44:53 server id 1 end_log_pos 2210 CRC32 0x254dbf14 Xid = 2590
COMMIT/*!*/;
# at 2210
#210315 12:44:54 server id 1 end_log_pos 2275 CRC32 0x138e8eaa Anonymous_GTID last_committed=5 sequence_number=6 rbr_only=yes
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 2275
#210315 12:44:54 server id 1 end_log_pos 2352 CRC32 0xc0d61341 Query thread_id=7 exec_time=0 error_code=0
SET TIMESTAMP=1615805094/*!*/;
BEGIN
/*!*/;
# at 2352
#210315 12:44:54 server id 1 end_log_pos 2426 CRC32 0xac66ab4d Table_map: `spring_db`.`Images` mapped to number 109
# at 2426
#210315 12:44:54 server id 1 end_log_pos 2580 CRC32 0x4dc75f62 Write_rows: table id 109 flags: STMT_END_F

BINLOG '
pjpPYBMBAAAASgAAAHoJAAAAAG0AAAAAAAEACXNwcmluZ19kYgAGSW1hZ2VzAAr+A/4B/v7+AQEB
Cv4Y/sDuAPcB9wEAAE2rZqw=
pjpPYB4BAAAAmgAAABQKAAAAAG0AAAAAAAEAAgAK//8A/AYxMjM0NTaYOk9gLjEuMi44MjYuMC4x
LjM2ODAwNDMuMTAuNDYzLjEyMzQ1Ni4xNjE1ODA1MDgwLjEIMgAxLjIuODI2LjAuMS4zNjgwMDQz
LjEwLjQ2My4xMjM0NTYuMTYxNTgwNTA4MC4xLjguOAgBAAAAYl/HTQ==
'/*!*/;
# at 2580
#210315 12:44:54 server id 1 end_log_pos 2611 CRC32 0x9a465ad1 Xid = 2596
COMMIT/*!*/;
# at 2611
#210315 12:44:54 server id 1 end_log_pos 2676 CRC32 0x20f36dc8 Anonymous_GTID last_committed=6 sequence_number=7 rbr_only=yes
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 2676
#210315 12:44:54 server id 1 end_log_pos 2753 CRC32 0x292422a4 Query thread_id=7 exec_time=0 error_code=0
SET TIMESTAMP=1615805094/*!*/;
BEGIN
/*!*/;
# at 2753
#210315 12:44:54 server id 1 end_log_pos 2827 CRC32 0xe183b8c1 Table_map: `spring_db`.`Images` mapped to number 109
# at 2827
#210315 12:44:54 server id 1 end_log_pos 3101 CRC32 0x445ca0a8 Update_rows: table id 109 flags: STMT_END_F

BINLOG '
pjpPYBMBAAAASgAAAAsLAAAAAG0AAAAAAAEACXNwcmluZ19kYgAGSW1hZ2VzAAr+A/4B/v7+AQEB
Cv4Y/sDuAPcB9wEAAMG4g+E=
pjpPYB8BAAAAEgEAAB0MAAAAAG0AAAAAAAEAAgAK/////wD8BjEyMzQ1Npg6T2AuMS4yLjgyNi4w
LjEuMzY4MDA0My4xMC40NjMuMTIzNDU2LjE2MTU4MDUwODAuMQgyADEuMi44MjYuMC4xLjM2ODAw
NDMuMTAuNDYzLjEyMzQ1Ni4xNjE1ODA1MDgwLjEuOC44CAEAAAAA/AYxMjM0NTaYOk9gLjEuMi44
MjYuMC4xLjM2ODAwNDMuMTAuNDYzLjEyMzQ1Ni4xNjE1ODA1MDgwLjEIMgAxLjIuODI2LjAuMS4z
NjgwMDQzLjEwLjQ2My4xMjM0NTYuMTYxNTgwNTA4MC4xLjguOAgBAQAAqKBcRA==
'/*!*/;
# at 3101
#210315 12:44:54 server id 1 end_log_pos 3132 CRC32 0x2e123687 Xid = 2598
COMMIT/*!*/;
# at 3132
#210315 12:44:55 server id 1 end_log_pos 3197 CRC32 0x8d5e12a8 Anonymous_GTID last_committed=7 sequence_number=8 rbr_only=yes
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 3197
#210315 12:44:55 server id 1 end_log_pos 3274 CRC32 0x161d9a20 Query thread_id=7 exec_time=0 error_code=0
SET TIMESTAMP=1615805095/*!*/;
BEGIN
/*!*/;
# at 3274
#210315 12:44:55 server id 1 end_log_pos 3348 CRC32 0x2a456445 Table_map: `spring_db`.`Images` mapped to number 109
# at 3348
#210315 12:44:55 server id 1 end_log_pos 3502 CRC32 0xe84a9e73 Write_rows: table id 109 flags: STMT_END_F

BINLOG '
pzpPYBMBAAAASgAAABQNAAAAAG0AAAAAAAEACXNwcmluZ19kYgAGSW1hZ2VzAAr+A/4B/v7+AQEB
Cv4Y/sDuAPcB9wEAAEVkRSo=
pzpPYB4BAAAAmgAAAK4NAAAAAG0AAAAAAAEAAgAK//8A/AYxMjM0NTaYOk9gLjEuMi44MjYuMC4x
LjM2ODAwNDMuMTAuNDYzLjEyMzQ1Ni4xNjE1ODA1MDgwLjEFMgAxLjIuODI2LjAuMS4zNjgwMDQz
LjEwLjQ2My4xMjM0NTYuMTYxNTgwNTA4MC4xLjUuNQUBAAAAc55K6A==
'/*!*/;
# at 3502
#210315 12:44:55 server id 1 end_log_pos 3533 CRC32 0xf8db555a Xid = 2604
COMMIT/*!*/;
# at 3533
#210315 12:44:55 server id 1 end_log_pos 3598 CRC32 0x81b9162d Anonymous_GTID last_committed=8 sequence_number=9 rbr_only=yes
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 3598
#210315 12:44:55 server id 1 end_log_pos 3675 CRC32 0xcc6b4eae Query thread_id=7 exec_time=0 error_code=0
SET TIMESTAMP=1615805095/*!*/;
BEGIN
/*!*/;
# at 3675
#210315 12:44:55 server id 1 end_log_pos 3749 CRC32 0x0546e890 Table_map: `spring_db`.`Images` mapped to number 109
# at 3749
#210315 12:44:55 server id 1 end_log_pos 4023 CRC32 0xe6a543db Update_rows: table id 109 flags: STMT_END_F

BINLOG '
pzpPYBMBAAAASgAAAKUOAAAAAG0AAAAAAAEACXNwcmluZ19kYgAGSW1hZ2VzAAr+A/4B/v7+AQEB
Cv4Y/sDuAPcB9wEAAJDoRgU=
pzpPYB8BAAAAEgEAALcPAAAAAG0AAAAAAAEAAgAK/////wD8BjEyMzQ1Npg6T2AuMS4yLjgyNi4w
LjEuMzY4MDA0My4xMC40NjMuMTIzNDU2LjE2MTU4MDUwODAuMQUyADEuMi44MjYuMC4xLjM2ODAw
NDMuMTAuNDYzLjEyMzQ1Ni4xNjE1ODA1MDgwLjEuNS41BQEAAAAA/AYxMjM0NTaYOk9gLjEuMi44
MjYuMC4xLjM2ODAwNDMuMTAuNDYzLjEyMzQ1Ni4xNjE1ODA1MDgwLjEFMgAxLjIuODI2LjAuMS4z
NjgwMDQzLjEwLjQ2My4xMjM0NTYuMTYxNTgwNTA4MC4xLjUuNQUBAQAA20Ol5g==
'/*!*/;
# at 4023
#210315 12:44:55 server id 1 end_log_pos 4054 CRC32 0xf328e0ed Xid = 2606
COMMIT/*!*/;
# at 4054
#210315 12:44:57 server id 1 end_log_pos 4119 CRC32 0x38d3c379 Anonymous_GTID last_committed=9 sequence_number=10 rbr_only=yes
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 4119
#210315 12:44:57 server id 1 end_log_pos 4196 CRC32 0xe43db41b Query thread_id=7 exec_time=0 error_code=0
SET TIMESTAMP=1615805097/*!*/;
BEGIN
/*!*/;
# at 4196
#210315 12:44:57 server id 1 end_log_pos 4270 CRC32 0xe09e9ec7 Table_map: `spring_db`.`Images` mapped to number 109
# at 4270
#210315 12:44:57 server id 1 end_log_pos 4424 CRC32 0x976d67fd Write_rows: table id 109 flags: STMT_END_F

BINLOG '
qTpPYBMBAAAASgAAAK4QAAAAAG0AAAAAAAEACXNwcmluZ19kYgAGSW1hZ2VzAAr+A/4B/v7+AQEB
Cv4Y/sDuAPcB9wEAAMeenuA=
qTpPYB4BAAAAmgAAAEgRAAAAAG0AAAAAAAEAAgAK//8A/AYxMjM0NTaYOk9gLjEuMi44MjYuMC4x
LjM2ODAwNDMuMTAuNDYzLjEyMzQ1Ni4xNjE1ODA1MDgwLjEJMgAxLjIuODI2LjAuMS4zNjgwMDQz
LjEwLjQ2My4xMjM0NTYuMTYxNTgwNTA4MC4xLjkuOQkBAAAA/Wdtlw==
'/*!*/;
# at 4424
#210315 12:44:57 server id 1 end_log_pos 4455 CRC32 0x9bc3a609 Xid = 2612
COMMIT/*!*/;
# at 4455
#210315 12:44:57 server id 1 end_log_pos 4520 CRC32 0x5d22df91 Anonymous_GTID last_committed=10 sequence_number=11 rbr_only=yes
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 4520
#210315 12:44:57 server id 1 end_log_pos 4597 CRC32 0x859f6e86 Query thread_id=7 exec_time=0 error_code=0
SET TIMESTAMP=1615805097/*!*/;
BEGIN
/*!*/;
# at 4597
#210315 12:44:57 server id 1 end_log_pos 4671 CRC32 0x71b041c7 Table_map: `spring_db`.`Images` mapped to number 109
# at 4671
#210315 12:44:57 server id 1 end_log_pos 4945 CRC32 0x4d1f0d82 Update_rows: table id 109 flags: STMT_END_F

BINLOG '
qTpPYBMBAAAASgAAAD8SAAAAAG0AAAAAAAEACXNwcmluZ19kYgAGSW1hZ2VzAAr+A/4B/v7+AQEB
Cv4Y/sDuAPcB9wEAAMdBsHE=
qTpPYB8BAAAAEgEAAFETAAAAAG0AAAAAAAEAAgAK/////wD8BjEyMzQ1Npg6T2AuMS4yLjgyNi4w
LjEuMzY4MDA0My4xMC40NjMuMTIzNDU2LjE2MTU4MDUwODAuMQkyADEuMi44MjYuMC4xLjM2ODAw
NDMuMTAuNDYzLjEyMzQ1Ni4xNjE1ODA1MDgwLjEuOS45CQEAAAAA/AYxMjM0NTaYOk9gLjEuMi44
MjYuMC4xLjM2ODAwNDMuMTAuNDYzLjEyMzQ1Ni4xNjE1ODA1MDgwLjEJMgAxLjIuODI2LjAuMS4z
NjgwMDQzLjEwLjQ2My4xMjM0NTYuMTYxNTgwNTA4MC4xLjkuOQkBAQAAgg0fTQ==
'/*!*/;
# at 4945
#210315 12:44:57 server id 1 end_log_pos 4976 CRC32 0xbac58293 Xid = 2614
COMMIT/*!*/;
# at 4976
#210315 12:44:57 server id 1 end_log_pos 5041 CRC32 0x4dce1cec Anonymous_GTID last_committed=11 sequence_number=12 rbr_only=yes
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 5041
#210315 12:44:57 server id 1 end_log_pos 5118 CRC32 0x332b9b89 Query thread_id=7 exec_time=0 error_code=0
SET TIMESTAMP=1615805097/*!*/;
BEGIN
/*!*/;
# at 5118
#210315 12:44:57 server id 1 end_log_pos 5192 CRC32 0x19fff04f Table_map: `spring_db`.`Images` mapped to number 109
# at 5192
#210315 12:44:57 server id 1 end_log_pos 5346 CRC32 0xa2a4ca65 Write_rows: table id 109 flags: STMT_END_F

BINLOG '
qTpPYBMBAAAASgAAAEgUAAAAAG0AAAAAAAEACXNwcmluZ19kYgAGSW1hZ2VzAAr+A/4B/v7+AQEB
Cv4Y/sDuAPcB9wEAAE/w/xk=
qTpPYB4BAAAAmgAAAOIUAAAAAG0AAAAAAAEAAgAK//8A/AYxMjM0NTaYOk9gLjEuMi44MjYuMC4x
LjM2ODAwNDMuMTAuNDYzLjEyMzQ1Ni4xNjE1ODA1MDgwLjECMgAxLjIuODI2LjAuMS4zNjgwMDQz
LjEwLjQ2My4xMjM0NTYuMTYxNTgwNTA4MC4xLjIuMgIBAAAAZcqkog==
'/*!*/;
# at 5346
#210315 12:44:57 server id 1 end_log_pos 5377 CRC32 0xfc8013b5 Xid = 2620
COMMIT/*!*/;
# at 5377
#210315 12:44:57 server id 1 end_log_pos 5442 CRC32 0x95abaa61 Anonymous_GTID last_committed=12 sequence_number=13 rbr_only=yes
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 5442
#210315 12:44:57 server id 1 end_log_pos 5519 CRC32 0x14ca3fc0 Query thread_id=7 exec_time=0 error_code=0
SET TIMESTAMP=1615805097/*!*/;
BEGIN
/*!*/;
# at 5519
#210315 12:44:57 server id 1 end_log_pos 5593 CRC32 0xae6e1b36 Table_map: `spring_db`.`Images` mapped to number 109
# at 5593
#210315 12:44:57 server id 1 end_log_pos 5867 CRC32 0xbdd9ebd7 Update_rows: table id 109 flags: STMT_END_F

BINLOG '
qTpPYBMBAAAASgAAANkVAAAAAG0AAAAAAAEACXNwcmluZ19kYgAGSW1hZ2VzAAr+A/4B/v7+AQEB
Cv4Y/sDuAPcB9wEAADYbbq4=
qTpPYB8BAAAAEgEAAOsWAAAAAG0AAAAAAAEAAgAK/////wD8BjEyMzQ1Npg6T2AuMS4yLjgyNi4w
LjEuMzY4MDA0My4xMC40NjMuMTIzNDU2LjE2MTU4MDUwODAuMQIyADEuMi44MjYuMC4xLjM2ODAw
NDMuMTAuNDYzLjEyMzQ1Ni4xNjE1ODA1MDgwLjEuMi4yAgEAAAAA/AYxMjM0NTaYOk9gLjEuMi44
MjYuMC4xLjM2ODAwNDMuMTAuNDYzLjEyMzQ1Ni4xNjE1ODA1MDgwLjECMgAxLjIuODI2LjAuMS4z
NjgwMDQzLjEwLjQ2My4xMjM0NTYuMTYxNTgwNTA4MC4xLjIuMgIBAQAA1+vZvQ==
'/*!*/;
# at 5867
#210315 12:44:57 server id 1 end_log_pos 5898 CRC32 0x91c19a6e Xid = 2622
COMMIT/*!*/;
# at 5898
#210315 12:45:00 server id 1 end_log_pos 5963 CRC32 0x3d78d926 Anonymous_GTID last_committed=13 sequence_number=14 rbr_only=yes
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 5963
#210315 12:45:00 server id 1 end_log_pos 6040 CRC32 0x5ac767c3 Query thread_id=7 exec_time=0 error_code=0
SET TIMESTAMP=1615805100/*!*/;
BEGIN
/*!*/;
# at 6040
#210315 12:45:00 server id 1 end_log_pos 6114 CRC32 0x6c066691 Table_map: `spring_db`.`Images` mapped to number 109
# at 6114
#210315 12:45:00 server id 1 end_log_pos 6268 CRC32 0xaf315ffb Write_rows: table id 109 flags: STMT_END_F

BINLOG '
rDpPYBMBAAAASgAAAOIXAAAAAG0AAAAAAAEACXNwcmluZ19kYgAGSW1hZ2VzAAr+A/4B/v7+AQEB
Cv4Y/sDuAPcB9wEAAJFmBmw=
rDpPYB4BAAAAmgAAAHwYAAAAAG0AAAAAAAEAAgAK//8A/AYxMjM0NTaYOk9gLjEuMi44MjYuMC4x
LjM2ODAwNDMuMTAuNDYzLjEyMzQ1Ni4xNjE1ODA1MDgwLjEEMgAxLjIuODI2LjAuMS4zNjgwMDQz
LjEwLjQ2My4xMjM0NTYuMTYxNTgwNTA4MC4xLjQuNAQBAAAA+18xrw==
'/*!*/;
# at 6268
#210315 12:45:00 server id 1 end_log_pos 6299 CRC32 0xfc2280ff Xid = 2628
COMMIT/*!*/;
# at 6299
#210315 12:45:00 server id 1 end_log_pos 6364 CRC32 0xe4c34d5e Anonymous_GTID last_committed=14 sequence_number=15 rbr_only=yes
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 6364
#210315 12:45:00 server id 1 end_log_pos 6441 CRC32 0x343521b4 Query thread_id=7 exec_time=0 error_code=0
SET TIMESTAMP=1615805100/*!*/;
BEGIN
/*!*/;
# at 6441
#210315 12:45:00 server id 1 end_log_pos 6515 CRC32 0x67d46875 Table_map: `spring_db`.`Images` mapped to number 109
# at 6515
#210315 12:45:00 server id 1 end_log_pos 6789 CRC32 0x2e802f43 Update_rows: table id 109 flags: STMT_END_F

BINLOG '
rDpPYBMBAAAASgAAAHMZAAAAAG0AAAAAAAEACXNwcmluZ19kYgAGSW1hZ2VzAAr+A/4B/v7+AQEB
Cv4Y/sDuAPcB9wEAAHVo1Gc=
rDpPYB8BAAAAEgEAAIUaAAAAAG0AAAAAAAEAAgAK/////wD8BjEyMzQ1Npg6T2AuMS4yLjgyNi4w
LjEuMzY4MDA0My4xMC40NjMuMTIzNDU2LjE2MTU4MDUwODAuMQQyADEuMi44MjYuMC4xLjM2ODAw
NDMuMTAuNDYzLjEyMzQ1Ni4xNjE1ODA1MDgwLjEuNC40BAEAAAAA/AYxMjM0NTaYOk9gLjEuMi44
MjYuMC4xLjM2ODAwNDMuMTAuNDYzLjEyMzQ1Ni4xNjE1ODA1MDgwLjEEMgAxLjIuODI2LjAuMS4z
NjgwMDQzLjEwLjQ2My4xMjM0NTYuMTYxNTgwNTA4MC4xLjQuNAQBAQAAQy+ALg==
'/*!*/;
# at 6789
#210315 12:45:00 server id 1 end_log_pos 6820 CRC32 0x2319c28b Xid = 2630
COMMIT/*!*/;
# at 6820
#210315 12:45:02 server id 1 end_log_pos 6867 CRC32 0xedeee5f4 Rotate to mysql-bin.018630 pos: 4
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;
DELIMITER ;
# End of log file
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;
I would like to replicate a MySQL database that runs on a non-cloud server. I understand that master-slave MySQL replication requires the master to enable incoming connections from the slave. Since the sites I deploy to don't allow that, I opted for periodically generating and exporting binary logs, and then importing the resulting SQL into the Cloud MySQL instance (see attached file).

I run the following command:
>gcloud sql import sql spring-cloud-sql gs://retino_cloud/import_failure.sql

My problem is that the import fails with the error:
ERROR 1227 (42000) at line 1: Access denied; you need (at least one of) the SUPER, SYSTEM_VARIABLES_ADMIN, SESSION_VARIABLES_ADMIN or REPLICATION_APPLIER privilege(s) for this operation

Is there a better way to perform the replication? If not, does GCloud MySQL support import of binary logs (transformed to SQL)?

Thanks,
Yoav.


Friday, March 12, 2021

[google-cloud-sql-discuss] Re: 32TB limit... per instance? per database? per table?

Hello Adam, 

Cloud SQL is a managed environment: you don't have to install and maintain the server yourself. One consequence is that the set of machine types offered is limited, and you choose during the initial setup which machine best suits your purposes. For the set of machines offered, the maximum possible storage is fixed and depends on the machine type, up to the maximum value you mentioned. In other words, it is the architecture of the Cloud SQL managed environment that fixes these limits.

On Tuesday, 09 March 2021 at 16:26:28 UTC-5 adam...@gmail.com wrote:
https://cloud.google.com/sql/docs/postgres/quotas
shows a limit of 32TB per instance, but PostgreSQL itself only has a 32 TB limit *per table*.

What's the source of this GCP limit?

What have you done to work around this limit?

thanks!
adam


Re: [google-cloud-sql-discuss] Re: How does Cloud SQL (Postgres) import from CSV even work?

For what it's worth, the way I figured out how to do it was to use cloud_sql_proxy; then I could use psql, which has a \copy command that is basically just like COPY but doesn't require superuser privileges.
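A minimal sketch of that approach, assuming the proxy is listening on 127.0.0.1:5432 and using the table and columns from the original post (the user and database names are placeholders):

./cloud_sql_proxy -instances=my-project:us-central1:my-db=tcp:5432 &
psql "host=127.0.0.1 port=5432 user=myuser dbname=mydb" \
  -c "\copy users(email, password, pwd_kind, failed_auth_attempts, last_auth_attempt, locked_until, verified) from 'import.csv' csv header"

\copy reads the file on the client side, which is why no superuser is needed on the server.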

On Fri, Mar 12, 2021, 11:08 AM 'Elliott (Google Cloud Platform Support)' via Google Cloud SQL discuss <google-cloud-sql-discuss@googlegroups.com> wrote:

Hello Nate,


I understand that you would like to transform your CSV file to match your database table columns. You also mentioned that there is no documentation available for implementing this workflow, namely how the import feature is used and how the CSV should be formatted.

I checked the documentation for a way to do what you want, but you are right: this capability does not currently exist, and I apologize for that. I was able to confirm the commands you used, and you are correct that they cannot be run as a regular user; a superuser is required, which does not work for you.

I think this is an opportunity to create a feature request asking the Cloud SQL Specialists to introduce a feature that satisfies your use case. I was unable to find an exactly matching existing feature request, but I would like to create one on your behalf. I can only imagine how frustrating it must be not to have all the tools available to do your work.

To add value to this feature request, can you please describe how you would like to have the mapping implemented in order for you to do your work?

If you have any other questions or follow ups, you may reply to this thread and we will assist you then.

Thank you for your patience and understanding.


I will wait for your response.



On Monday, March 1, 2021 at 12:44:42 PM UTC-5 nate....@gmail.com wrote:
I need to import approximately 152k rows from an external database. I have the info in a CSV file exported from the other database.

I need to transform that CSV so it matches the format my database expects. For my local Postgres this is easy: I can format it and use the COPY command, like this:

COPY users(email, password, pwd_kind, failed_auth_attempts, last_auth_attempt, locked_until, verified)
FROM '/Users/finchnat/import.csv'
DELIMITER ','
CSV HEADER
QUOTE '"';

But Cloud SQL doesn't let you have superusers, so I can't use COPY. And the CSV import feature doesn't seem to give you any options at all for mapping what's in the CSV to the database table columns.

And as far as I can tell there's literally zero information about how to format your CSV file so the import feature will know how to map the columns in the CSV to the columns in the database table.

So... how do you use it?  And where's the documentation that fills in the gigantic missing piece of how to format your CSV so it works with this feature?

Any help would be appreciated.

-Nate



[google-cloud-sql-discuss] Re: Larger instances with effective_cache_size configured don't support --offload for serverless exports

Hello Jeroen,

I understand that on larger Cloud SQL instances with effective_cache_size set to the maximum, an "invalidFlagValue" error results when trying to perform a serverless export.

The serverless exports feature lets you export data from your MySQL or PostgreSQL database instances without affecting performance or posing a risk to your production workloads.
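For reference, a serverless export of a single database to a GCS bucket in the same region looks roughly like this (instance, bucket, and database names are placeholders):

gcloud sql export sql my-instance gs://my-bucket/export.sql.gz \
  --database=mydb \
  --offload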

I noticed that you have opened a public issue tracker for this and the analyst was able to reproduce the error when trying to perform a serverless export with the effective_cache_size flag set to 4500000.

For the benefit of the community, I am posting your reproduction steps from that issue here:

1. Create a Postgres instance with effective_cache_size configured to the maximum allowed for the instance size (8 vCPUs, 52 GiB, configured to 4500000; a larger instance with the flag at its maximum triggers the same issue).
2. Attempt to start a serverless export (SQL, single DB, to a GCS bucket in the same region).
3. Observe the described error.
4. Remove the flag value.
5. Repeat step 2.
6. Confirm that the export works without the flag.

Other information (workarounds tried, documentation consulted, etc.):

- Removing the effective_cache_size flag works.
- Non-serverless exports work regardless of the flag configuration.
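For readers who want to reproduce this, a minimal gcloud sketch of the steps above might look like the following (instance, database, and bucket names are placeholders; the flag value assumes the 8 vCPU / 52 GiB tier mentioned above):

```
# Set effective_cache_size near its maximum for the tier.
# Caution: --database-flags replaces ALL flags set on the instance.
gcloud sql instances patch my-instance \
    --database-flags=effective_cache_size=4500000

# Attempt a serverless export; with the flag set, this is the call
# reported to fail with "invalidFlagValue".
gcloud sql export sql my-instance gs://my-bucket/export.sql.gz \
    --database=my-db --offload

# Workaround from the steps above: clear the flag and export again.
gcloud sql instances patch my-instance --clear-database-flags
gcloud sql export sql my-instance gs://my-bucket/export.sql.gz \
    --database=my-db --offload
```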



She advised that the Cloud SQL Specialists are aware of this issue and are working towards a resolution. There is no ETA right now, but you may follow the progress in the issue tracker.

We apologize for the inconvenience and thank you for reporting it.

Thank you.




On Tuesday, March 9, 2021 at 6:26:07 PM UTC-5 Jeroen Visser wrote:
When creating an export using `--offload`, for serverless exports, on an instance that has `effective_cache_size` set to the maximum allowed for the instance size, an error occurs with reason `invalidFlagValue`. When removing the flag, the export can be started.

Is this expected behaviour? Is there a way to avoid this flag being applied to the temporary serverless instance?

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/20a70171-ebcd-4a19-b177-25fe0fbf470bn%40googlegroups.com.

[google-cloud-sql-discuss] Re: How does Cloud SQL (Postgres) import from CSV even work?

Hello Nate,


I understand that you would like to transform your CSV file to match your database table's columns. You also mentioned that there is no documentation available explaining how the import feature is used or how to implement your workflow.

I checked the documentation for a way to do what you want, but you are right: this capability does not currently exist, and I apologize for that. I was able to confirm the commands you used, and you are correct that they cannot be run as a regular user; they require a superuser, which is not available on Cloud SQL.

I think this is an opportunity to create a feature request asking the Cloud SQL Specialists to introduce a feature that satisfies your use case. I was unable to find an existing feature request for exactly this, but I would like to create one on your behalf. I can only imagine how frustrating it must be not to have all the tools available to do your work.

To add value to this feature request, could you please describe how you would like the CSV-to-column mapping to be implemented so that it covers your workflow?

If you have any other questions or follow-ups, you may reply to this thread and we will assist you then.

Thank you for your patience and understanding.


I will wait for your response.
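As an aside for readers with the same need: one client-side workaround (not an official Cloud SQL import feature, so treat this as a sketch) is psql's \copy meta-command, which runs COPY FROM STDIN on the client and therefore does not require server superuser privileges. Using the column list and file path from the quoted example below; host, port, user, and dbname are placeholders:

```
# Connect through the Cloud SQL Auth proxy (or any allowed network path)
# and stream the CSV from the client machine.
psql "host=127.0.0.1 port=5432 dbname=mydb user=myuser" \
  -c "\copy users(email, password, pwd_kind, failed_auth_attempts, last_auth_attempt, locked_until, verified) FROM '/Users/finchnat/import.csv' WITH (FORMAT csv, HEADER, DELIMITER ',', QUOTE '\"')"
```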



On Monday, March 1, 2021 at 12:44:42 PM UTC-5 nate....@gmail.com wrote:
I need to import approximately 152k rows from an external database. I have the info in a CSV file exported from the other database.

I need to transform that CSV so it matches the format my database expects. For my local Postgres this is easy: I can format it and use the COPY command, like this:

COPY users(email, password, pwd_kind, failed_auth_attempts, last_auth_attempt, locked_until, verified)
FROM '/Users/finchnat/import.csv'
DELIMITER ','
CSV HEADER
QUOTE '"';

But Cloud SQL doesn't let you have superusers, so I can't use COPY. And the CSV import feature doesn't seem to give you any options at all for mapping what's in the CSV to the database table columns.

And as far as I can tell, there's literally zero information about how to format your CSV file so the import feature knows how to map the columns in the CSV to the columns in the database table.

So... how do you use it? And where's the documentation that fills in the gigantic missing piece of how to format your CSV so it works with this feature?

Any help would be appreciated.

-Nate

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/a7665540-7acf-40b8-9c8e-0e10368fb18fn%40googlegroups.com.

Tuesday, March 9, 2021

[google-cloud-sql-discuss] Larger instances with effective_cache_size configured don't support --offload for serverless exports

When creating an export using `--offload`, for serverless exports, on an instance that has `effective_cache_size` set to the maximum allowed for the instance size, an error occurs with reason `invalidFlagValue`. When removing the flag, the export can be started.

Is this expected behaviour? Is there a way to avoid this flag being applied to the temporary serverless instance?

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/99c73eb6-55e5-45c0-823b-de1d776d0a3bn%40googlegroups.com.

[google-cloud-sql-discuss] 32TB limit... per instance? per database? per table?

https://cloud.google.com/sql/docs/postgres/quotas
shows a limit of 32 TB per instance, but PostgreSQL itself only has a 32 TB limit *per table*.
https://www.postgresql.org/docs/12/limits.html

What's the source of this GCP limit?

What have you done to work around this limit?

thanks!
adam

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/aa46cf44-c454-4b6c-ad3e-1301e7a1835bn%40googlegroups.com.

Thursday, March 4, 2021

[google-cloud-sql-discuss] Promoting Read Replica

I have a primary database with one read replica in a different region, created using a Terraform script. Is it possible to promote the read replica to primary using Terraform, and is there any sample script available? Thanks in advance.
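For context, outside Terraform the underlying operation is a one-way replica promotion; a minimal gcloud sketch, assuming a replica named my-replica:

```
# Promote the read replica to a standalone primary instance.
# This is irreversible: the instance stops replicating and becomes writable.
gcloud sql instances promote-replica my-replica
```

If the Terraform google provider version in use does not expose promotion directly, it may need to be done out-of-band with gcloud or the Admin API and the state reconciled afterwards.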

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/de03eb6b-5b37-42ed-bb42-84b70fc32fbcn%40googlegroups.com.

Re: [google-cloud-sql-discuss] Re: cloud proxy - in cloud build, connection?

Hey, thanks for the response. This is actually running on the Cloud Build servers, not locally.

The proxy seems to just "be there"; I did not configure it, and it seems to be part of the build image.

On Thu, Mar 4, 2021 at 6:37 PM 'Aref Amiri (Cloud Platform Support)' via Google Cloud SQL discuss <google-cloud-sql-discuss@googlegroups.com> wrote:
Hi,

Is there an outbound firewall policy? If yes, you'd have to make sure it allows connections to port 3307, per this public doc.

On Thursday, March 4, 2021 at 10:53:01 AM UTC-5 rcdh wrote:
Hi there,

I recently started evaluating GCP for tooling and have a question regarding the Cloud SQL proxy that starts automatically.

I don't seem to be able to connect to it in the documented manner on 127.0.0.1:3307.

Anyone have any experience with this?

```
Step #2 - "migrate": ---------- CONNECT CLOUDSQL ----------
Step #2 - "migrate": cloud_sql_proxy is running.
Step #2 - "migrate": Connections: hardy-clover-******:europe-west3:db-dev-mccmsdemo.
Step #2 - "migrate":
Step #2 - "migrate": ---------- EXECUTE COMMAND ----------
Step #2 - "migrate": sh .cloudbuild/django_migrate.sh
Step #2 - "migrate": 🎸 migrate postgres://*******:***********@127.0.0.1:3307/dbname
Step #2 - "migrate": Traceback (most recent call last):
Step #2 - "migrate": File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
Step #2 - "migrate": self.connect()
Step #2 - "migrate": File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 26, in inner
Step #2 - "migrate": return func(*args, **kwargs)
Step #2 - "migrate": File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 200, in connect
Step #2 - "migrate": self.connection = self.get_new_connection(conn_params)
Step #2 - "migrate": File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 26, in inner
Step #2 - "migrate": return func(*args, **kwargs)
Step #2 - "migrate": File "/usr/local/lib/python3.8/site-packages/django/db/backends/postgresql/base.py", line 187, in get_new_connection
Step #2 - "migrate": connection = Database.connect(**conn_params)
Step #2 - "migrate": File "/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py", line 127, in connect
Step #2 - "migrate": conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
Step #2 - "migrate": psycopg2.OperationalError: could not connect to server: Connection refused
Step #2 - "migrate": Is the server running on host "127.0.0.1" and accepting
Step #2 - "migrate": TCP/IP connections on port 3307?
Step #2 - "migrate":
Step #2 - "migrate":
Step #2 - "migrate": The above exception was the direct cause of the following exception:
```

--
You received this message because you are subscribed to a topic in the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/google-cloud-sql-discuss/V5rNXCL4os0/unsubscribe.
To unsubscribe from this group and all its topics, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/356b52f5-304b-43ea-b865-c1a03293d178n%40googlegroups.com.

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/CAMtJjOVP%3DseJ9OR%3DXTLb37d-H97%3D1jJm6i%3DkN2_zXskOYW%3Dv6A%40mail.gmail.com.

[google-cloud-sql-discuss] Re: cloud proxy - in cloud build, connection?

Hi,

Is there an outbound firewall policy? If yes, you'd have to make sure it allows connections to port 3307, per this public doc.
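It may also help to note that 3307 is the port the proxy itself uses outbound to reach the Cloud SQL instance; locally, the proxy listens on whatever port or Unix socket it was started with. A minimal sketch of running the (v1) proxy on a local TCP port and connecting to it, with project, region, instance, and connection details as placeholders:

```
# Start the Cloud SQL Auth proxy listening locally on 5432; the proxy
# itself dials out to the instance on port 3307.
./cloud_sql_proxy -instances=my-project:europe-west3:my-instance=tcp:5432 &

# Connect to the local listener, not to 3307.
psql "host=127.0.0.1 port=5432 user=myuser dbname=mydb"
```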

On Thursday, March 4, 2021 at 10:53:01 AM UTC-5 rcdh wrote:
Hi there,

I recently started evaluating GCP for tooling and have a question regarding the Cloud SQL proxy that starts automatically.

I don't seem to be able to connect to it in the documented manner on 127.0.0.1:3307.

Anyone have any experience with this?

```
Step #2 - "migrate": ---------- CONNECT CLOUDSQL ----------
Step #2 - "migrate": cloud_sql_proxy is running.
Step #2 - "migrate": Connections: hardy-clover-******:europe-west3:db-dev-mccmsdemo.
Step #2 - "migrate":
Step #2 - "migrate": ---------- EXECUTE COMMAND ----------
Step #2 - "migrate": sh .cloudbuild/django_migrate.sh
Step #2 - "migrate": 🎸 migrate postgres://*******:***********@127.0.0.1:3307/dbname
Step #2 - "migrate": Traceback (most recent call last):
Step #2 - "migrate": File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
Step #2 - "migrate": self.connect()
Step #2 - "migrate": File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 26, in inner
Step #2 - "migrate": return func(*args, **kwargs)
Step #2 - "migrate": File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 200, in connect
Step #2 - "migrate": self.connection = self.get_new_connection(conn_params)
Step #2 - "migrate": File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 26, in inner
Step #2 - "migrate": return func(*args, **kwargs)
Step #2 - "migrate": File "/usr/local/lib/python3.8/site-packages/django/db/backends/postgresql/base.py", line 187, in get_new_connection
Step #2 - "migrate": connection = Database.connect(**conn_params)
Step #2 - "migrate": File "/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py", line 127, in connect
Step #2 - "migrate": conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
Step #2 - "migrate": psycopg2.OperationalError: could not connect to server: Connection refused
Step #2 - "migrate": Is the server running on host "127.0.0.1" and accepting
Step #2 - "migrate": TCP/IP connections on port 3307?
Step #2 - "migrate":
Step #2 - "migrate":
Step #2 - "migrate": The above exception was the direct cause of the following exception:
```

--
You received this message because you are subscribed to the Google Groups "Google Cloud SQL discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to google-cloud-sql-discuss+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/google-cloud-sql-discuss/356b52f5-304b-43ea-b865-c1a03293d178n%40googlegroups.com.