Setting up a local HTTPS network to mock amazonaws.com in Docker

I have a requirement to set up a spoofed/mock AWS server in my local Docker Compose network. The goal is to test a set of microservices without letting them know that the endpoint is not actually AWS.
For example, if a microservice that uses the AWS SDK tries to make a service call to create a queue, it calls https://eu-west-1.queue.amazonaws.com. I have a local DNS server installed which resolves that hostname to a reverse proxy (Traefik), which in turn routes it to the mock server.
When the service call is made, it fails at the reverse proxy with the error below:
traefik_1 | time="2018-10-11T15:11:28Z" level=debug msg="http: TLS handshake error from 10.5.0.7:59058: remote error: tls: unknown certificate authority"
Can anyone help me set up the system in such a way that the call succeeds?

You're not going to be able to MITM the HTTPS API request and return a different response. You can give the SDK a different URL to hit (without HTTPS, or with a self-signed cert), and then set up a proxy that forwards requests to Amazon when you want them sent to Amazon, and to your other service when you want to mock them.
Some information on how to change the API request URL in the JavaScript SDK (as an example): https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/specifying-endpoints.html

tls: unknown certificate authority
Based on this error message, you need to update the list of trusted CAs in your environment. This needs to be done inside each image (or resulting container) that will connect to your mock service. The process varies based on the base image you select, and this question on unix.se covers many of the methods.
The Debian process:
apt-get install ca-certificates
cp cacert.pem /usr/share/ca-certificates
dpkg-reconfigure ca-certificates
The CentOS process:
cp cacert.pem /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
The Alpine process:
apk add --no-cache ca-certificates
mkdir /usr/local/share/ca-certificates
cp cacert.pem /usr/local/share/ca-certificates/
update-ca-certificates
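Once the trusted CAs are updated, you can sanity-check the result from inside the container. Here is a minimal sketch in Python, assuming the requests library is available and a Debian/Alpine-style bundle path (on CentOS the bundle lives at /etc/pki/tls/certs/ca-bundle.crt instead):
# Sketch: confirm the container now trusts the mock CA.
# requests normally uses its bundled certifi store, so point it explicitly
# at the system bundle that the commands above just rebuilt.
import requests

resp = requests.get(
    "https://eu-west-1.queue.amazonaws.com",  # resolved by your local DNS to Traefik
    verify="/etc/ssl/certs/ca-certificates.crt",  # Debian/Alpine system bundle
)
print(resp.status_code)  # no SSLError means the TLS handshake now succeeds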

You are going to struggle (or have to compromise) to intercept the AWS API calls without bypassing validation of the certificate chain.
I suggest that you provide a custom endpoint to the AWS SDK client in your Node.js code to point to the LocalStack endpoint. This value can be passed via environment variables in your test environments.
var sqsClient = new AWS.SQS({
  endpoint: process.env.SQSCLIENT
});
Then pass the LocalStack URL into the container for test environments:
docker run -e SQSCLIENT='http://localstack:4576' mymicroservice
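If some of your microservices are Python rather than Node.js, the same pattern with boto3 would look roughly like this (a sketch; it reuses the SQSCLIENT variable from above):
# Sketch: point a boto3 SQS client at LocalStack via the same env var.
import os
import boto3

sqs_client = boto3.client(
    "sqs",
    endpoint_url=os.environ.get("SQSCLIENT", "http://localstack:4576"),
    region_name="eu-west-1",  # LocalStack accepts any region string
)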

Related

How to run 2 WSO2 Identity Server services in the same system

I've used the 5.11.0 .deb package and installed it using
sudo dpkg -i packagename
Now I can run WSO2 IS as a service by running sudo service wso2is-5.11.0 start.
But I don't know how to run a second instance, preferably on port 9444.
Easily:
Download the WSO2 Identity Server from https://wso2.com/identity-server/#
Extract the file to a dedicated directory. For the purposes of this scenario, this is referred to as <IS_HOME_PRIMARY>.
Make a copy of this folder in the same location and rename it. This copy is referred to as <IS_HOME_SECONDARY>.
By default, the HTTPS port of the primary IS instance is 9443. Leave this as it is.
There are two ways to set a port offset (to get 9444):
Option 1: From <IS_HOME_SECONDARY>/bin, pass the port offset to the server during startup. The following command starts the server with the default ports incremented by 1:
./wso2server.sh -DportOffset=1
Option 2: Set the offset value in the <IS_HOME_SECONDARY>/repository/conf/deployment.toml file as follows:
[server]
offset = 1
This changes the HTTPS port of the secondary IS instance to 9444.
Then install and run the two Identity Server instances:
Go to <IS_HOME_PRIMARY>/bin and <IS_HOME_SECONDARY>/bin in your command line and run the following command for each instance.
On Linux/Solaris: sh wso2server.sh
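To verify that both instances came up on their respective ports, here is a quick probe from Python (a sketch: fresh installs use self-signed certificates, hence verify=False, and /carbon is assumed as the management console path):
# Sketch: check that the primary (9443) and secondary (9444) instances respond.
import requests
import urllib3

urllib3.disable_warnings()  # fresh WSO2 installs use self-signed certs

for port in (9443, 9444):
    r = requests.get(f"https://localhost:{port}/carbon", verify=False)
    print(port, r.status_code)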
Also, as @ycr mentioned, you can configure the WSO2 Identity Server as a Linux service: https://is.docs.wso2.com/en/5.11.0/setup/installing-as-a-linux-service/#running-the-product-as-a-linux-service
You can't start the same installed service multiple times with different port offsets. Hence, download the product again, unzip it, and start the second copy with a port offset as described above.

Vertex AI Custom Container deployment

I have a simple application that has a PyTorch model for predicting emotions in text. The model gets downloaded inside the container when it starts.
Unfortunately, the deployment in Vertex AI fails every time with the message:
Failed to deploy model "emotion_recognition" to endpoint "emotions" due to the error: Error: model server never became ready. Please validate that your model file or container configuration are valid.
Here is my Dockerfile:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8-slim
COPY requirements.txt ./requirements.txt
RUN pip install -r requirements.txt
WORKDIR /usr/src/emotions
COPY ./schemas/ /emotions/schemas
COPY ./main.py /emotions
COPY ./utils.py /emotions
ENV PORT 8080
ENV HOST "0.0.0.0"
WORKDIR /emotions
EXPOSE 8080
CMD ["uvicorn", "main:app"]
Here's my main.py:
from fastapi import FastAPI, Request
from utils import get_emotion
from schemas.schema import Prediction, Predictions, Response

app = FastAPI(title="People Analytics")

@app.get("/isalive")
async def health():
    message = "The Endpoint is running successfully"
    status = "Ok"
    code = 200
    response = Response(message=message, status=status, code=code)
    return response

@app.post("/predict",
          response_model=Predictions,
          response_model_exclude_unset=True)
async def predict_emotions(request: Request):
    body = await request.json()
    print(body)
    instances = body["instances"]
    print(instances)
    print(type(instances))
    instances = [x['text'] for x in instances]
    print(instances)
    outputs = []
    for text in instances:
        emotion = get_emotion(text)
        outputs.append(Prediction(emotion=emotion))
    return Predictions(predictions=outputs)
I cannot see the cause of the error in Cloud Logging, so I am curious about the reason. Please check whether my health/predict routes are correct for Vertex AI, or whether there's something else I have to change.
I would recommend enabling logging when deploying the endpoint, so that you get more meaningful information from the logs.
This issue could be due to several reasons:
Make sure that the container is configured to listen on port 8080. Vertex AI sends liveness checks, health checks, and prediction requests to this port on the container, and your container's HTTP server must listen for requests on it (see the sketch after this list).
Make sure that you have the required permissions. For this you can follow the GCP documentation, and also validate that the account you are using has enough permissions to read your project's GCS bucket.
Vertex AI has some quota limits; to verify this, you can also follow the GCP documentation.
As per the documentation, Vertex AI should select the default prediction and health routes if you did not specify a path.
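On the first point: the Dockerfile in the question ends with CMD ["uvicorn", "main:app"], and uvicorn defaults to binding 127.0.0.1:8000, so probes on port 8080 would never be answered. One way to bind it correctly is sketched below (an assumption based on the question's code, not on the actual logs; Vertex AI sets the AIP_HTTP_PORT environment variable for custom containers):
# Sketch: append to main.py and switch the Dockerfile to CMD ["python", "main.py"].
import os
import uvicorn

if __name__ == "__main__":
    # Vertex AI custom containers receive AIP_HTTP_PORT (8080 by default).
    port = int(os.environ.get("AIP_HTTP_PORT", 8080))
    uvicorn.run(app, host="0.0.0.0", port=port)  # app defined earlier in main.py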
If none of the suggestions above work, you will need to contact GCP Support by creating a support case; it's impossible for the community to troubleshoot it without internal GCP resources.

DynamoDB local web shell does not load

I am running DynamoDB locally using the instructions here. To remove potential Docker networking issues I am using the "Download Locally" version of the instructions. Before running Dynamo locally, I run aws configure to set some fake values for the AWS access key, secret key, and region; here is the output:
$ aws configure
AWS Access Key ID [****************fake]:
AWS Secret Access Key [****************ake2]:
Default region name [local]:
Default output format [json]:
Here is the output of running Dynamo locally:
$ java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb
Initializing DynamoDB Local with the following configuration:
Port: 8000
InMemory: false
DbPath: null
SharedDb: true
shouldDelayTransientStatuses: false
CorsParams: *
I can confirm that DynamoDB is running locally by listing tables with the AWS CLI:
$ aws dynamodb list-tables --endpoint-url http://localhost:8000
{
    "TableNames": []
}
but when I visit http://localhost:8000/shell in my browser, the page does not load and I get an error.
I tried running curl against the shell endpoint to see if I could get a more useful error message:
$ curl http://localhost:8000/shell
{
    "__type": "com.amazonaws.dynamodb.v20120810#MissingAuthenticationToken",
    "Message": "Request must contain either a valid (registered) AWS access key ID or X.509 certificate."
}
I tried looking up the error above, but there isn't much setup I can do when the shell runs entirely in the browser. Any help is appreciated on how I can run the DynamoDB JavaScript web shell with this setup.
Software versions:
aws cli: aws-cli/2.4.7 Python/3.9.9 Darwin/20.6.0 source/x86_64 prompt/off
OS: MacOS Big Sur 11.6.2 (20G314)
The DynamoDB Local Web Shell was deprecated with version 1.16.X and is no longer available from 1.17.X onwards. There are no immediate plans for a new Web Shell to be introduced.
You can download an old version of DynamoDB Local (< 1.17.X) should you wish to use the Web Shell.
Available versions:
aws s3 ls s3://dynamodb-local-frankfurt/
Download the most recent working version with the Web Shell:
aws s3 cp s3://dynamodb-local-frankfurt/dynamodb_local_2021-04-27.tar.gz .
The next release of DynamoDB Local will have an updated README indicating its deprecation
As I answered in DynamoDB local http://localhost:8000/shell, this appears to be a regression in new versions of DynamoDB Local: the shell mysteriously stopped working, whereas in versions from a year ago it works.
Somebody should report it to Amazon. If there is some flag that new versions require to enable the shell, it isn't documented anywhere that I can find.
Update Java to the latest version and voila, it works!

Requirements for launching Google Cloud AI Platform Notebooks with custom docker image

On AI Platform Notebooks, the UI lets you select a custom image to launch. If you do so, you're greeted with an info box saying that the container "must follow certain technical requirements".
I assume this means they have a required entrypoint, exposed port, jupyterlab launch command, or something, but I can't find any documentation of what the requirements actually are.
I've been trying to reverse engineer it without much luck. I nmapped a standard instance and saw that it had port 8080 open, but setting my image's CMD to run Jupyter Lab on 0.0.0.0:8080 did not do the trick. When I click "Open JupyterLab" in the UI, I get a 504.
Does anyone have a link to the relevant docs, or experience with doing this in the past?
There are two ways you can create custom containers:
Building a Derivative Container
If you only need to install additional packages, you should create a Dockerfile derived from one of the standard images (for example, FROM gcr.io/deeplearning-platform-release/tf-gpu.1-13:latest), then add RUN commands to install packages using conda/pip/jupyter.
The conda base environment has already been added to the path, so there is no need to conda init/conda activate unless you need to set up another environment. Additional scripts or dynamic environment variables that need to be run prior to bringing up the environment can be added to /env.sh, which is sourced as part of the entrypoint.
For example, let’s say that you have a custom built TensorFlow wheel that you’d like to use in place of the built-in TensorFlow binary. If you need no additional dependencies, your Dockerfile will be similar to:
Dockerfile.example
FROM gcr.io/deeplearning-platform-release/tf-gpu:latest
RUN pip uninstall -y tensorflow-gpu && \
    pip install /path/to/local/tensorflow.whl
Then you’ll need to build and push it somewhere accessible to your GCE service account.
PROJECT="my-gcp-project"
docker build . -f Dockerfile.example -t "gcr.io/${PROJECT}/tf-custom:latest"
gcloud auth configure-docker
docker push "gcr.io/${PROJECT}/tf-custom:latest"
Building Container From Scratch
The main requirement is that the container must expose a service on port 8080.
The sidecar proxy agent that executes on the VM will ferry requests to this port only.
If using Jupyter, you should also make sure your jupyter_notebook_config.py is configured as such:
c.NotebookApp.token = ''
c.NotebookApp.password = ''
c.NotebookApp.open_browser = False
c.NotebookApp.port = 8080
c.NotebookApp.allow_origin_pat = (
    '(^https://8080-dot-[0-9]+-dot-devshell\.appspot\.com$)|'
    '(^https://colab\.research\.google\.com$)|'
    '((https?://)?[0-9a-z]+-dot-datalab-vm[\-0-9a-z]*.googleusercontent.com)')
c.NotebookApp.allow_remote_access = True
c.NotebookApp.disable_check_xsrf = False
c.NotebookApp.notebook_dir = '/home'
This disables notebook token-based auth (auth is instead handled through OAuth login on the proxy), and allows cross-origin requests from three sources: Cloud Shell web preview, Colab (see this blog post), and the Cloud Notebooks service proxy. Only the third is required for the notebook service; the first two support alternate access patterns.
To complete Zain's answer, below you can find a minimal example using the official Jupyter image, inspired by this repo: https://github.com/doitintl/AI-Platform-Notebook-Using-Custom-Container
Dockerfile
FROM jupyter/base-notebook:python-3.9.5
EXPOSE 8080
ENTRYPOINT ["jupyter", "lab", "--ip", "0.0.0.0", "--allow-root", "--config", "/etc/jupyter/jupyter_notebook_config.py"]
COPY jupyter_notebook_config.py /etc/jupyter/
jupyter_notebook_config.py
(almost the same as Zain's, but with extra patterns enabling communication with the kernel; the communication didn't work without them)
c.NotebookApp.ip = '*'
c.NotebookApp.token = ''
c.NotebookApp.password = ''
c.NotebookApp.open_browser = False
c.NotebookApp.port = 8080
c.NotebookApp.allow_origin_pat = '(^https://8080-dot-[0-9]+-dot-devshell\.appspot\.com$)|(^https://colab\.research\.google\.com$)|((https?://)?[0-9a-z]+-dot-datalab-vm[\-0-9a-z]*.googleusercontent.com)|((https?://)?[0-9a-z]+-dot-[\-0-9a-z]*.notebooks.googleusercontent.com)|((https?://)?[0-9a-z\-]+\.[0-9a-z\-]+\.cloudshell\.dev)|((https?://)ssh\.cloud\.google\.com/devshell)'
c.NotebookApp.allow_remote_access = True
c.NotebookApp.disable_check_xsrf = False
c.NotebookApp.notebook_dir = '/home'
c.Session.debug = True
Finally, keep this page in mind while troubleshooting: https://cloud.google.com/notebooks/docs/troubleshooting

Cannot connect to DynamoDB endpoint on Windows 10

I've been following this tutorial (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.DownloadingAndRunning.html) on how to set up a downloadable DynamoDB on my computer, but I keep running into an issue when I try to connect to localhost.
I have checked my hosts file and everything seems to be OK.
I am using Windows 10 cmd and these are the outputs on my command line:
C:\Users\Desktop\dynamodb_local_latest>java -D"java.library.path=./DynamoDBLocal_lib" -jar DynamoDBLocal.jar
Initializing DynamoDB Local with the following configuration:
Port: 8000
InMemory: false
DbPath: null
SharedDb: false
shouldDelayTransientStatuses: false
CorsParams: *
C:\Users\Desktop\dynamodb_local_latest>aws dynamodb list-tables --endpoint-url http://localhost:8000
Could not connect to the endpoint URL: "http://localhost:8000/"
C:\Users\Desktop\dynamodb_local_latest>
Any help will be greatly appreciated!
You must run 'aws configure' and set the required parameters (even if you're only using a local DynamoDB emulator, the access/secret keys can be dummy values).
In addition to running aws configure as mentioned in @J.S.'s answer, you will need to ensure DynamoDB is actually running. I recently had this error when the service had shut down and I didn't realize it. If this is your case, restart it by going to the folder it is installed in and running java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb &
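Once it is running and aws configure is set, you can also double-check the endpoint from code. A boto3 sketch (the credential values are dummies; DynamoDB Local accepts anything):
# Sketch: verify the local endpoint responds, mirroring
# "aws dynamodb list-tables --endpoint-url http://localhost:8000".
import boto3

client = boto3.client(
    "dynamodb",
    endpoint_url="http://localhost:8000",
    region_name="local",           # any region string works against the emulator
    aws_access_key_id="fake",      # dummy credentials
    aws_secret_access_key="fake",
)
print(client.list_tables())        # expect {'TableNames': []} on a fresh install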