Want to deploy a Google Cloud Run service via Terraform

I want to deploy a Google Cloud Run service using Terraform. When I try to define the container port with a 'port' block I get an error; I need to pass the container port from the template block but am unable to do that. Here is my .tf file -
resource "google_cloud_run_service" "default" {
name = "cloudrun-srv"
location = "us-central1"
template {
spec {
containers {
image = "us.gcr.io/xxxxxx/xxxx.app"
port {
container_port = 19006
}
}
}
}
traffic {
percent = 100
latest_revision = true
}
}
data "google_iam_policy" "noauth" {
binding {
role = "roles/run.invoker"
members = [
"allUsers",
]
}
}
resource "google_cloud_run_service_iam_policy" "noauth" {
location = google_cloud_run_service.default.location
project = google_cloud_run_service.default.project
service = google_cloud_run_service.default.name
policy_data = data.google_iam_policy.noauth.policy_data
}
output "url" {
value = "${google_cloud_run_service.default.status[0].url}"
}
With the port block, here is the error -
And if I don't pass the port block, here is the error -
I have to pass the container port value 19006 because my container runs on that port only. How do I pass container port 19006 instead of the default port 8080?

I had a look at the REST API exposed by Google for creating a Cloud Run service.
This starts with the entry here:
POST https://{endpoint}/apis/serving.knative.dev/v1/{parent}/services
where the body contains a Service,
which contains a ServiceSpec,
which contains a RevisionTemplate,
which contains a RevisionSpec,
which contains a Container,
which contains a ContainerPort.
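To make the nesting concrete, the request body would look roughly like this (a sketch only; the field names follow the Knative Serving v1 API, and the image and port values are taken from the question):

{
  "apiVersion": "serving.knative.dev/v1",
  "kind": "Service",
  "metadata": { "name": "cloudrun-srv" },
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "image": "us.gcr.io/xxxxxx/xxxx.app",
            "ports": [
              { "containerPort": 19006 }
            ]
          }
        ]
      }
    }
  }
}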
If we now map this to the source of the Terraform provider that handles the creation of Cloud Run services, we find:
https://github.com/terraform-providers/terraform-provider-google/blob/2dc3da62e3844d14fb2136e09f13ea934b038411/google/resource_cloud_run_service.go#L90
and in the comments, we find the following:
In the context of a Revision, we disallow a number of the fields of
this Container, including: name, ports, and volumeMounts. The runtime
contract is documented here:
https://github.com/knative/serving/blob/master/docs/runtime-contract.md
While disallowing name and volumeMounts seems OK to me at this point, I don't see the reason that ports is not mapped.
From this, though, the inability to specify a port through Terraform appears to be deliberate rather than an omission, while the ability to specify a port is indeed present in Google's REST API.
I was then going to suggest that you raise a defect through GitHub, but then wondered if it was already present. I did some digging and there is already a request for the missing feature:
Allow specifying 'container_port' and 'request_timeout' for google_cloud_run_service
My belief is that the core answer to your question then becomes:
What you are trying to do should work with Terraform; it has been raised as an issue, and we must wait for the resolution in the Terraform provider.

The block should be ports (i.e. plural), not port

I needed to expose port 9000 and solved it this way:
resource "google_cloud_run_service" "service" {
...
template {
spec {
containers {
...
ports {
container_port = 9000
}
}
}
}
}

Related

AWS Keyspace DSBulk unload failed, "Token metadata not present"

I get an error when trying to unload or count data from AWS Keyspaces using DSBulk.
Error:
Operation COUNT_20221021-192729-813222 failed: Token metadata not present.
Command line:
$ dsbulk count/unload -k my_best_storage -t book_awards -f ./dsbulk_keyspaces.conf
Config:
datastax-java-driver {
  basic.contact-points = [ "cassandra.us-east-2.amazonaws.com:9142" ]
  advanced.auth-provider {
    class = PlainTextAuthProvider
    username = "aw.keyspaces-at-XXX"
    password = "XXXX"
  }
  basic.load-balancing-policy {
    local-datacenter = "us-east-2"
  }
  basic.request {
    consistency = LOCAL_QUORUM
    default-idempotence = true
  }
  advanced {
    request {
      log-warnings = true
    }
    ssl-engine-factory {
      class = DefaultSslEngineFactory
      truststore-path = "./cassandra_truststore.jks"
      truststore-password = "XXX"
      hostname-validation = false
    }
    metadata {
      token-map.enabled = false
    }
  }
}
dsbulk load - the load operation works fine...
I suspect the problem here is that your cluster is using the proprietary com.amazonaws.cassandra.DefaultPartitioner partitioner which most open-source tools and drivers don't recognise.
The DataStax Bulk Loader (DSBulk) tool uses the Cassandra Java driver under the hood to connect to Cassandra clusters. The Java driver uses the partitioner to determine which nodes own which token ranges. Only the following Cassandra partitioners are supported:
Murmur3Partitioner
RandomPartitioner
ByteOrderedPartitioner
Since the Java driver doesn't know about DefaultPartitioner, it doesn't have a map of token range owners (token metadata) and so can't determine how to "split" the Cassandra ring to query the nodes.
As you already figured out, this doesn't affect the load command because it simply sends writes to coordinators and lets the coordinators figure out how the data is partitioned. But for unload and count commands which require reads, the Java driver can't determine which coordinators to pick for sub-range queries with an unsupported partitioner.
Maybe as a workaround you can try to disable token-awareness with:
$ dsbulk count [...]
--driver.advanced.metadata.token-map.enabled false
but I don't have an AWS Keyspaces cluster I could test and I'm doubtful it will work. In any case, you're welcome to try.
There is an outstanding DSBulk feature request to provide the ability to completely disable token-awareness (internal ticket ID DAT-622) but it is unassigned at the time of writing so I'm not in a position to provide any expectation on when it will be prioritised. Cheers!
Amazon Keyspaces now supports multiple partitioners, including Murmur3Partitioner. See the following to update your partitioner. You will also want to set token-map.enabled to true.
metadata {
  token-map.enabled = true
}
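If I remember the Keyspaces procedure correctly, the partitioner itself is switched with a CQL update against system.local; treat the exact statement below as an assumption and verify it against the Amazon Keyspaces documentation:

-- Check the current partitioner, then switch it (assumed syntax)
SELECT partitioner FROM system.local;
UPDATE system.local SET partitioner = 'org.apache.cassandra.dht.Murmur3Partitioner' WHERE key = 'local';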
Additionally, if you are using VPC Endpoints you will need the following permissions to make sure that you will see available peers.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListVPCEndpoints",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeNetworkInterfaces",
        "ec2:DescribeVpcEndpoints"
      ],
      "Resource": "*"
    }
  ]
}
I would also recommend increasing the connection pool size for the data load process.
advanced.connection.pool.local.size = 3
Finally, I would recommend using AWS Glue instead of DSBulk. DSBulk is a single-process tool and will not scale for larger data loads. Additionally, learning Glue will be helpful in managing other aspects of the data lifecycle. See my example of how to unload/export data using AWS Glue.

STANDARD network tier is not supported for global address

I'd like to add an A-type DNS record on GCP with the following Terraform code:
data "google_dns_managed_zone" "env_dns_zone" {
name = "env-zone"
}
resource "google_compute_global_address" "argo_events_webhook" {
name = "argo-events-webhook"
}
/*
resource "google_dns_record_set" "argo-events-webhook" {
name = "argo-events-webhook.${data.google_dns_managed_zone.env_dns_zone.dns_name}"
managed_zone = data.google_dns_managed_zone.env_dns_zone.name
rrdatas = [google_compute_global_address.argo_events_webhook.address]
ttl = 600
type = "A"
}
*/
(The commented-out part is not causing the error, but it may be relevant as it shows more about what I want to achieve.)
But this yields the following error message ...
...
module.gke.google_compute_global_address.argo_events_webhook: Creating...
Error: Error creating GlobalAddress: googleapi: Error 400: STANDARD network tier (the project's default network tier) is not supported: STANDARD network tier is not supported for global address., badRequest
... for which I can't find more information. Does somebody have an idea how to solve this?
What I find confusing is that there are already A-type entries in place, and my Terraform code is copy-pasted from their corresponding tf code (with names adjusted).
The Standard network tier doesn't use Google's global fiber network; it uses the "standard internet", locally to the region. If you use a global address, the address is globally reachable, and thus you need the Premium network tier to access this feature.
More details here.
In your case, you have to update the project configuration to the Premium network tier. You can achieve this with Terraform:
resource "google_compute_project_default_network_tier" "default" {
network_tier = "PREMIUM"
}

An Issue with an AWS EC2 instance WebSocket connection failed: Error in connection establishment: net::ERR_CONNECTION_TIMED_OUT

When I ran the chat app from localhost, connected to a MySQL database and coded in PHP using WebSockets, it was successful.
But when I tried to run it from the PuTTY terminal, logged in with the SSH credentials, instead of displaying "Server Started" with port 8080 I got:
ubuntu#ec3-193-123-96:/home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/server$ php websocket_server.php
PHP Fatal error: Uncaught React\Socket\ConnectionException: Could not bind to tcp://0.0.0.0:8080: Address already in use in /home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/vendor/react/socket/src/Server.php:29
Stack trace:
#0 /home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/vendor/cboden/ratchet/src/Ratchet/Server/IoServer.php(70): React\Socket\Server->listen(8080, '0.0.0.0')
#1 /home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/server/websocket_server.php(121): Ratchet\Server\IoServer::factory(Object(Ratchet\Http\HttpServer), 8080)
#2 {main}
thrown in /home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/vendor/react/socket/src/Server.php on line 29
ubuntu#ec3-193-123-96:/home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/server$
So I changed port 8080 to port 8282, and it was successful:
ubuntu#ec3-193-123-96:/home/admin/web/ec3-193-123-96.eu-central-1.compute.amazonaws.com/public_html/application/libraries/server$ php websocket_server.php
Keeping the shell script running, open a couple of web browser windows, and open a JavaScript console or a page with the following JavaScript:
var conn = new WebSocket('ws://0.0.0.0:8282');
conn.onopen = function(e) {
    console.log("Connection established!");
};
conn.onmessage = function(e) {
    console.log(e.data);
};
From the browser console results:
WebSocket connection to 'ws://5.160.195.94:8282/' failed: Error in
connection establishment: net::ERR_CONNECTION_TIMED_OUT
websocket_server.php
<?php
use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;
use MyApp\Chat;

require dirname(__DIR__) . '/vendor/autoload.php';

$server = IoServer::factory(
    new HttpServer(
        new WsServer(
            new Chat()
        )
    ),
    8282
);

$server->run();
I even tried assigning the public IP and the private IP, but to no avail; it gave the same old result.
These are the composer files generated after running composer require cboden/ratchet and adding the src folder.
composer.json(On AmazonWebServer)
{
    "autoload": {
        "psr-4": {
            "MyApp\\": "src"
        }
    },
    "require": {
        "cboden/ratchet": "^0.4.1"
    }
}
composer.json(On localhost)
{
    "autoload": {
        "psr-4": {
            "MyApp\\": "src"
        }
    },
    "require": {
        "cboden/ratchet": "^0.4.3"
    }
}
How am I supposed to resolve this and connect over the WebSocket, especially from the hosted server with a domain name such as
http://ec3-193-123-96.eu-central-1.compute.amazonaws.com/
var conn = new WebSocket('ws://localhost:8282');
From the Security Group
Under Inbound tab
Under Outbound tab
When it comes to a connectivity issue with an EC2 instance, there are a few things you need to check to find the root cause.
SSH into the EC2 instance the application is running on and make sure you can access the app from within the instance. If that works, then it's a network-related issue that we need to solve.
If step 1 was successful, you have now identified a network issue. To solve it, you need to check the following (a CLI sketch follows this list):
Check that an Internet Gateway is created and attached to your VPC.
Next, check that your subnet's route table has its default route pointing to the internet gateway. Check this link to complete this and the above step.
Check your subnet's Network ACL rules to see that the ports are not blocked.
Finally, you would want to check your instance's security group, as you have shown.
If you need access via the EC2 DNS name, you will need to provision your EC2 instance in a public subnet and assign an Elastic IP.
If the issue still exists, check whether the EC2 status checks pass, or try provisioning a new instance.
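A rough sketch of how these checks might look from the AWS CLI (the instance, VPC and security group IDs are placeholders; port 8282 is the one used in the question):

# Which security groups, subnet and VPC does the instance use, and does it have a public IP?
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].{SG:SecurityGroups,Subnet:SubnetId,Vpc:VpcId,PublicIp:PublicIpAddress}'

# Is an internet gateway attached to the VPC?
aws ec2 describe-internet-gateways --filters Name=attachment.vpc-id,Values=vpc-0123456789abcdef0

# Does the subnet's route table have a default route (0.0.0.0/0) to that gateway?
aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-0123456789abcdef0

# Are the network ACLs blocking the port?
aws ec2 describe-network-acls --filters Name=vpc-id,Values=vpc-0123456789abcdef0

# Does the security group allow inbound TCP 8282?
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0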

How to set up an Akka Cluster on multiple machines?

I have looked into the official Akka documentation and I am confused. I followed this link, used the same application.conf shown there, and changed the seed nodes to my other machine's IP.
akka {
  actor.provider = "akka.cluster.ClusterActorRefProvider"
  remote.netty.tcp.port = 0
  remote.netty.tcp.hostname = 127.0.0.1
  cluster {
    seed-nodes = [
      "akka.tcp://ClusterSystem@slave01:2551",
      "akka.tcp://ClusterSystem@slave02:2552"]
    auto-down-unreachable-after = 10s
  }
  extensions = ["akka.cluster.client.ClusterClientReceptionist"]
  persistence {
    journal.plugin = "akka.persistence.journal.leveldb-shared"
    journal.leveldb-shared.store {
      # DO NOT USE 'native = off' IN PRODUCTION !!!
      native = off
      dir = "target/shared-journal"
    }
    snapshot-store.plugin = "akka.persistence.snapshot-store.local"
    snapshot-store.local.dir = "target/snapshots"
  }
}
The problem is that it says the node is unreachable and the connection is refused. Any suggestions?
Have you made sure the connection isn't blocked by a firewall on the two hosts? I would first check whether both slave01 and slave02 are remotely reachable using telnet on their corresponding ports (e.g. telnet slave02 2552). And in case slave01 and slave02 are hostnames or FQDNs, they would need to be mapped to the corresponding IP addresses in /etc/hosts or DNS accordingly.
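For the cluster to form, each node also has to bind remoting to an address the other machine can actually reach rather than 127.0.0.1, and seed nodes need a fixed port. A sketch of the relevant lines for the node running on slave01, assuming the classic remoting settings used in the question (hostnames from the question; the port is an example):

akka {
  remote.netty.tcp {
    # bind to this machine's reachable hostname or IP, not 127.0.0.1
    hostname = "slave01"
    # seed nodes need a fixed, known port; 0 picks a random one
    port = 2551
  }
}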

aws opsworks multiple nodejs apps?

I have one OpsWorks Node.js stack, and I set up multiple Node.js apps. The problem is that all the Node.js server.js scripts listen on port 80 for the Amazon life check, but that port can be used by only one of them.
I don't know how to solve this. I have read the Amazon documentation but could not find a solution. I read that I could try to change the deploy recipe variables to set this life check to a different port, but it didn't work. Any help?
I battled with this issue for a while and eventually found a very simple solution.
The port is set in the deploy cookbook's attributes...
https://github.com/aws/opsworks-cookbooks/blob/release-chef-11.10/deploy/attributes/deploy.rb
by the line...
default[:deploy][application][:nodejs][:port] = deploy[:ssl_support] ? 443 : 80
You can override this using the stack's custom JSON, such as:
{
  "deploy" : {
    "app_name_1": {
      "nodejs": {
        "port": 80
      }
    },
    "app_name_2": {
      "nodejs": {
        "port": 3000
      }
    }
  },
  "mongodb" : {
    ...
  }
}
Now the monitrc files at /etc/monit.d/node_web_app-.monitrc should reflect their respective ports, and monit should keep them alive!
My solution was to implement a life-check Node service that listens on port 80. When the Amazon life check request is made to that service, it responds and executes its own logic to check the health of all the services. It works great.
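A minimal sketch of such a life-check service in Node (the app names and downstream ports 3000/3001 are hypothetical; adjust them to wherever your real apps listen). It answers the Amazon life check on port 80 and probes the other apps on their own ports:

const http = require('http');

// Hypothetical list of the real apps and the ports they listen on
const apps = [
  { name: 'app_name_1', port: 3000 },
  { name: 'app_name_2', port: 3001 },
];

// Probe one app over HTTP; resolve true if it answers with a non-5xx status
function checkApp(app) {
  return new Promise((resolve) => {
    const req = http.get({ host: '127.0.0.1', port: app.port, path: '/' }, (res) => {
      res.resume();                   // drain the response body
      resolve(res.statusCode < 500);
    });
    req.on('error', () => resolve(false));
    req.setTimeout(2000, () => { req.destroy(); resolve(false); });
  });
}

// The service the Amazon life check hits on port 80
http.createServer(async (req, res) => {
  const results = await Promise.all(apps.map(checkApp));
  const healthy = results.every(Boolean);
  res.writeHead(healthy ? 200 : 503, { 'Content-Type': 'text/plain' });
  res.end(healthy ? 'OK' : 'UNHEALTHY');
}).listen(80);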