I am trying to run a SonarQube app on the AWS Fargate platform. When I run the raw Docker image it works like a charm, but if I pass the JDBC properties to the container as an argument I face the following issue. Apparently, Elasticsearch needs new configuration. If this were an EC2-backed ECS cluster I would have SSH'd into the EC2 instances and updated these properties. In the case of Fargate, how do I achieve this?
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
From the GitHub issue, it seems this is not possible, as there is no EC2 instance or host involved in Fargate.
The workarounds for the max_map_count error appear to be setting vm.max_map_count directly on the host (which may result in undesirable side effects) or using the sysctl flag on the docker run command. Unfortunately, neither of these options is supported in Fargate, since both involve interacting with the container instance itself.
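For reference, on an EC2-backed cluster the host-level fix would be something along these lines, run on the container instance itself, which is exactly the kind of host access Fargate does not provide:
# run on the EC2 container instance hosting the task (not possible on Fargate)
sudo sysctl -w vm.max_map_count=262144
# persist the setting across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf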
But there is another way: increase the file limit and disable the mmap check.
I had to properly configure ulimits in my ECS task definition, something like:
"ulimits": [
{
"name": "nofile",
"softLimit": 65535,
"hardLimit": 65535
}
]
I've disabled mmap in Elasticsearch, which gets rid of the max_map_count requirement. This can be done by configuring the sonar.search.javaAdditionalOpts SonarQube setting. I wasn't able to do it with an environment variable, since ECS seems to be eating them, but in the end I just passed it as a parameter to the container, which works since the image's entrypoint is set and consumes arguments properly. In my case:
"command": [
"-Dsonar.search.javaAdditionalOpts=-Dnode.store.allow_mmapfs=false"
]
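Putting the two pieces together, a trimmed sketch of the relevant part of a Fargate container definition might look like the following (the container name and image tag are placeholders, not taken from the original setup):
{
  "name": "sonarqube",
  "image": "sonarqube:lts",
  "essential": true,
  "command": [
    "-Dsonar.search.javaAdditionalOpts=-Dnode.store.allow_mmapfs=false"
  ],
  "ulimits": [
    {
      "name": "nofile",
      "softLimit": 65535,
      "hardLimit": 65535
    }
  ]
}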
sonarqube disable mmap
Adding the variable discovery.type = single-node under the "Environment Variables" section (edit the Elasticsearch container) in the task definition resolved the issue for me. If you're using the JSON task definition, add the following section:
"environment": [
{
"name": "discovery.type",
"value": "single-node"
}
],
Unfortunately, I don't know why that worked.
I need to use user-data metadata larger than 256 KB for a few instances (Linux Fedora CoreOS), but I got:
is too large: maximum size 262144 character
https://cloud.google.com/compute/docs/metadata/setting-custom-metadata#limitations
Is it possible to:
somehow use user-data metadata larger than 256 KB?
or fetch the user-data for the instance from a remote server, e.g. could I upload this user-data to any HTTP server?
I found a solution via a remote Ignition file.
I tried to get the 256 KB per-metadata-value limit increased via Google support, but they can't increase it for me.
Workaround via a remote Ignition file:
In the Fedora CoreOS user-data, add the snippet below.
On Nginx, serve the big Ignition file (larger than 256 KB).
Start the instance; it will connect to the remote Nginx server to fetch the big Ignition file (an example gcloud command is shown after the snippet).
{
  "ignition": {
    "config": {
      "merge": [
        {
          "source": "http://xxxx/file.ign"
        }
      ]
    },
    "version": "3.2.0"
  }
}
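With that pointer config saved as, say, pointer.ign, booting the instance could look roughly like this (the instance name and file name are placeholders):
# pass the small pointer Ignition config as user-data;
# FCOS fetches the large file from Nginx at boot
gcloud compute instances create fcos-node \
  --image-family=fedora-coreos-stable \
  --image-project=fedora-coreos-cloud \
  --metadata-from-file user-data=pointer.ign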
Within Amazon EventBridge, I'm listening for Transcribe events such as the following:
{
  "source": ["aws.transcribe"],
  "detail-type": ["Transcribe Job State Change"],
  "detail": {
    "TranscriptionJobStatus": ["FAILED", "COMPLETED"]
  }
}
I need to send this event to development (via Ngrok), to staging, and to production, each time with a query parameter indicating which environment triggered the transcription.
Having worked on this simple use case for a full day, it simply seems bugged:
The first rule, target, and connection that I set up work fine.
Additional targets added to that rule do not work.
Brand-new rules I add to receive and handle the events do not work either.
If I delete everything and rebuild, again only the first rule, target, and connection work (even if they point at a different environment).
So for example, I've had dev but not staging working, ripped it all down, and then rebuilt, and ended up with staging but not dev working.
What on earth is going on?
We fixed it after considerable trial and error.
It's not clear what the root cause is. Likely issues were:
When a role is created for your connections the first time, it is created with the correct permissions; beyond that, check all IAM permissions with a fine-tooth comb.
EventBridge rules only match events when the triggering service and the rules are in the same region (e.g. us-east-1).
To help debug, export the CloudFormation templates for the setups that work and the ones that don't, and compare them for differences; the CLI sketch below is another way to dump the live configuration.
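For comparing a working and a non-working setup from the command line, a minimal sketch (the rule name and region are placeholders):
# dump the rule and its targets so the two environments can be diffed
aws events describe-rule --name transcribe-job-state --region us-east-1
aws events list-targets-by-rule --rule transcribe-job-state --region us-east-1
# the connections and API destinations backing the webhook targets
aws events list-connections --region us-east-1
aws events list-api-destinations --region us-east-1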
The rest of this answer is simply going to be advice to anyone considering using AWS EventBridge: run.
I'm trying to deploy chromedp/headless-shell to Cloud Run.
Here is my Dockerfile:
FROM chromedp/headless-shell
ENTRYPOINT [ "/headless-shell/headless-shell", "--remote-debugging-address=0.0.0.0", "--remote-debugging-port=9222", "--disable-gpu", "--headless", "--no-sandbox" ]
The command I used to deploy to Cloud Run is
gcloud run deploy chromedp-headless-shell --source . --port 9222
Problem
When I go to this path /json/list, I expect to see something like this
[{
  "description": "",
  "devtoolsFrontendUrl": "/devtools/inspector.html?ws=localhost:9222/devtools/page/B06F36A73E5F33A515E87C6AE4E2284E",
  "id": "B06F36A73E5F33A515E87C6AE4E2284E",
  "title": "about:blank",
  "type": "page",
  "url": "about:blank",
  "webSocketDebuggerUrl": "ws://localhost:9222/devtools/page/B06F36A73E5F33A515E87C6AE4E2284E"
}]
but instead, I get this error:
Host header is specified and is not an IP address or localhost.
Is there something wrong with my configuration or is Cloud Run not the ideal choice for deploying this?
This specific issue is not unique to Cloud Run. It originates from a change in the Chrome DevTools Protocol that raises this error when the debugging endpoint is accessed remotely, apparently as a security measure against certain types of attacks. You can see the related Chromium pull request here.
I deployed a chromedp/headless-shell container to Cloud Run using your configuration and received the same error. There is a useful comment in a GitHub issue showing a workaround for this problem: pass a Host: localhost header. While this does work when I tested it locally, it does not work on Cloud Run (it returns a 404 error). That 404 could be due to how Cloud Run itself uses the Host header to route requests to the correct service.
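For reference, the workaround from that GitHub comment boils down to overriding the Host header, roughly like this (the remote host is a placeholder; against Cloud Run the same request came back 404 for me):
# allowed because the DevTools endpoint only checks that Host is localhost or an IP
curl -H "Host: localhost" http://REMOTE_HOST:9222/json/list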
Unfortunately this answer is not a solution, but it sheds some light on what you are seeing and why. I would consider a different GCP service, such as GCE, which gives you plain virtual machines and is less managed.
I am trying to run a very simple custom command, echo helloworld, in GoCD, as per the Getting Started Guide Part 2. However, the job never finishes: the Console says "Waiting for console logs" and the raw output says "Console log for this job is unavailable as it may have been purged by Go or deleted externally."
My job looks like the following, which was produced by typing "echo" into the Lookup Command (this is different from the Getting Started example, which I tried first with the same result).
Judging from the screenshot, the problem seems to be that no agent is assigned to the task. For an agent to be assigned, it must satisfy all of these conditions:
An agent must be running, and connected to the server
The agent must be enabled on the "Agents" page
If you use environments, the job and the agent need to be in the same environment
The agent needs to have all of the resources assigned that are configured in the job
Found the issue.
The pipeline and the agent have to be in the same Environment for it to work (a config sketch follows).
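For reference, on older GoCD versions that association lives in cruise-config.xml, roughly like this (the environment name, pipeline name, and agent UUID are placeholders; on recent versions the agent side is assigned from the Agents page instead):
<environments>
  <environment name="development">
    <agents>
      <!-- UUID as shown on the Agents page -->
      <physical uuid="agent-uuid-here" />
    </agents>
    <pipelines>
      <pipeline name="HelloWorld" />
    </pipelines>
  </environment>
</environments>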
I have a microservice that I would normally have used Lambda for, but it occasionally takes longer than 5 minutes. So I created a Docker container and set it up so that every time I run it, it does its business and then stops. It works great and I'm happy with it.
What I'm not happy with is that the ECS "Last status" for the task shows "STOPPED" in red letters and the "Stopped reason" shows "Essential container in task exited". Is there some way to make this show "Success" in green and/or change the stopped reason to "Successful termination" or similar?
I wonder whether, if you flipped the bit that marks the container as "essential" in the container definition, it would no longer be considered an error. A "service" is a long-running thing, so maybe what you are really looking for is to just run a task from the AWS CLI rather than have it live as a service with 0 running tasks (an example run-task invocation is sketched below). Another option would be to have something in the service running as an API that, on request, runs the task via the AWS SDK.
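A minimal sketch of running the task on demand instead of as a service (cluster, task definition, and subnet are placeholders):
# one-off Fargate task: it runs to completion and simply stops,
# with no service around it to treat the exit as a failure
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition my-task:1 \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],assignPublicIp=ENABLED}"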