In a Docker container, the scality/s3server image is running. I am connecting to it from Node.js using the @aws-sdk/client-s3 API.
The S3Client setup looks like this:
const s3Client = new S3Client({
  region: undefined, // See comment below
  endpoint: 'http://127.0.0.1:8000',
  credentials: {
    accessKeyId: 'accessKey1',
    secretAccessKey: 'verySecretKey1',
  },
})
Region undefined: this answer to a similar question suggests leaving the region out, but accessing the region with await s3Client.config.region() still displays eu-central-1, the value I passed to the constructor in a previous version. Although I changed it to undefined, the old configuration still seems to be in effect. Could that be connected to the issue?
I was able to create a bucket (test) and list it with a ListBucketsCommand (await s3Client.send(new ListBucketsCommand({}))).
However, as mentioned in the title, uploading content or streams to the bucket with
const bucketParams = {
  Bucket: 'test',
  Key: 'test.txt',
  Body: 'Test Content',
}
await s3Client.send(new PutObjectCommand(bucketParams))
does not work; instead I get a DNS resolution error (which seems odd, since I manually typed the IP address, not localhost).
Anyway, here is the error message:
Error: getaddrinfo EAI_AGAIN test.127.0.0.1
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:72:26) {
  errno: -3001,
  code: 'EAI_AGAIN',
  syscall: 'getaddrinfo',
  hostname: 'test.127.0.0.1',
  '$metadata': { attempts: 1, totalRetryDelay: 0 }
}
Do you have any idea
why the region is still configured, and/or
why the DNS lookup happens and then fails, but only when uploading, not when listing or creating buckets?
For the second question, I found a workaround: instead of specifying the IP address directly, using endpoint: http://localhost:8000 (the hostname instead of the IP address) fixes the DNS lookup exception. However, there is no obvious reason why this should happen.
Related
I'm getting an
AccessDeniedException: User: arn:aws:iam::[ACCOUNTID]:user/esc-user-name is not authorized to perform: ssm:GetParametersByPath on resource: arn:aws:ssm:eu-central-1:[ACCOUNTID]:* because no identity-based policy allows the ssm:GetParametersByPath action
exception while reading the second page, while the first page of parameters is fetched successfully.
While deploying the Next.js application via ECS, I need to get the relevant env variables during instance startup. The ECS-related role is permitted to read params on a particular path, like
/my-nextjs-app/stg/
I have no problems getting the expected result for the first page of params with (code simplified for brevity):
const initialCommand = new GetParametersByPathCommand({
  Path: "/my-nextjs-app/stg/",
  WithDecryption: true,
  Recursive: true,
});
const response = await ssmClient.send(initialCommand)
As soon as I receive a NextToken in the response, I try to use it to fetch the next page like this:
const nextCommand = new GetParametersByPathCommand({
  Path: "/my-nextjs-app/stg/",
  WithDecryption: true,
  Recursive: true,
  NextToken: response.NextToken,
});
await ssmClient.send(nextCommand)
And I get the permission-denied error mentioned above.
It feels like when NextToken is defined in the command, the SSM client just ignores the Path param and tries to use the token as the source of all required data (I guess the path is somehow encoded into it, along with the pagination state).
Granting permission on the whole arn:aws:ssm:eu-central-1:[ACCOUNTID]:* is not an option for security reasons, and feels "dirty" anyway. My assumption is that if the SSM client was able to fetch the first page successfully, it should be able to fetch the next pages as well without additional permissions.
Meanwhile, using boto3, everything works with the same user/role.
Is it worth a bug report to @aws-sdk/client-ssm, or is there anything I've missed?
I am trying to connect to an AWS ElastiCache Redis cluster and I keep getting this error:
Error MOVED 12218 ip:6379
Following is the code (using https://www.npmjs.com/package/redis - redis: ^4.0.1):
import {createClient} from "redis";
const client = createClient({url: "redis://xyz.abc.clustercfg.use2.cache.amazonaws.com:6379"});
await client.connect();
console.log("client connected");
console.log(await client.ping());
OUTPUT:
client connected
PONG
But when I do await client.get(key) or await client.set(key, value), I get the MOVED error.
I even followed https://github.com/redis/node-redis/issues/1782, but I am still getting the same MOVED 12218 ip:6379 error.
I am assuming you are using a cluster-mode-enabled Redis in AWS (I am on "redis": "^4.1.0"). If so, you can try the code below, which uses createCluster instead of createClient:
const redis = require('redis');

const client = redis.createCluster({
  rootNodes: [
    {
      url: `redis://${ConfigurationEndpoint}:${port}`,
    },
  ],
  useReplicas: true,
});

// Don't forget to connect before issuing commands.
await client.connect();
Just some info: I realized that I had set up a Redis-Cluster Helm chart but was connecting with Jedis configured for a standalone/Sentinel Redis setup. Once I switched Jedis to JedisCluster, this error went away. That is a different client, so your setup might vary, but it is something to look at.
So I am attempting to create a Lambda function inside my VPC that requires S3 access. Most of the time it goes off without a hitch; however, sometimes it just hangs on s3.getObject until the function times out, with no error. I have set up a VPC endpoint, the endpoint is in the route table for the (private) subnet, access to the endpoint is not blocked by either the security group or the NACL, and the IAM permissions all seem to be in order (though if that were the issue, one would expect an error message).
I've boiled my code down to a simple get/put for the purposes of debugging, but here it is in case I am missing the incredibly obvious. I've spent hours googling this and tried everything suggested or that I can think of, and am basically out of ideas at this point, so I cannot emphasize enough how much I appreciate any help.
Update: I have run my code from an EC2 instance inside the same VPC/subnet/security group as the Lambda, and it does not have the same problem, so the issue seems to be with the Lambda configuration rather than any of the network configuration.
try {
  const getParams = {
    Bucket: 'MY_BUCKET',
    Key: '/path/to/file'
  };
  console.log('************** about to get', getParams);
  const getObject = await s3.getObject(getParams).promise();
  console.log('************** gotObject', getObject);

  const uploadParams = {
    Bucket: 'MY_BUCKET',
    Key: '/new/path/to/file',
    Body: getObject.Body
  };
  console.log('************** about to put', uploadParams);
  const putObject = await s3.putObject(uploadParams).promise();
  console.log('*************** object was put', putObject);
} catch (err) {
  console.log('***************** error', err);
}
};
When updating a CloudFront distribution using the AWS Node SDK, I get the following exception:
message:
'The If-Match version is missing or not valid for the resource.',
code: 'InvalidIfMatchVersion',
time: 2020-08-23T19:37:22.291Z,
requestId: '43a7707f-7177-4396-8013-4a704b7b11ea',
statusCode: 400,
retryable: false,
retryDelay: 57.05741843025996 }
What am I doing wrong here?
This error occurs because you need to first get the current distribution config with cloudfront.getDistributionConfig and then copy the ETag and DistributionConfig fields into your cloudfront.updateDistribution call, modifying the DistributionConfig as needed to achieve the actual config update you're after.
The docs do a poor job of explaining this (the instructions are for the REST API rather than the SDK you're using).
The following is an example that disables a CloudFront distribution by pulling the latest distribution config, modifying it, and then performing an update with it:
async function disable_cloudfront_distribution(dist_id) {
  // We need to pull the previous distribution config to update it.
  const previous_distribution_config = await cloudfront.getDistributionConfig({
    Id: dist_id
  }).promise();
  const e_tag = previous_distribution_config.ETag;

  // Update config to be disabled
  previous_distribution_config.DistributionConfig.Enabled = false;

  // Create update distribution request with distribution ID
  // and the copied config along with the returned ETag for IfMatch.
  const params = {
    Id: dist_id,
    DistributionConfig: previous_distribution_config.DistributionConfig,
    IfMatch: e_tag
  };
  return cloudfront.updateDistribution(params).promise();
}
I'm trying to call a Cloud Function from another one and for that, I'm following this documentation.
I've created two functions. This is the code for the function that calls the other one:
const {get} = require('axios');
// TODO(developer): set these values
const REGION = 'us-central1';
const PROJECT_ID = 'my-project-######';
const RECEIVING_FUNCTION = 'hello-world';
// Constants for setting up metadata server request
// See https://cloud.google.com/compute/docs/instances/verifying-instance-identity#request_signature
const functionURL = `https://${REGION}-${PROJECT_ID}.cloudfunctions.net/${RECEIVING_FUNCTION}`;
const metadataServerURL =
'http://metadata/computeMetadata/v1/instance/service-accounts/default/identity?audience=';
const tokenUrl = metadataServerURL + functionURL;
exports.proxy = async (req, res) => {
  // Fetch the token
  const tokenResponse = await get(tokenUrl, {
    headers: {
      'Metadata-Flavor': 'Google',
    },
  });
  const token = tokenResponse.data;
  console.log(`Token: ${token}`);

  // Provide the token in the request to the receiving function
  try {
    console.log(`Calling: ${functionURL}`);
    const functionResponse = await get(functionURL, {
      headers: {Authorization: `bearer ${token}`},
    });
    res.status(200).send(functionResponse.data);
  } catch (err) {
    console.error(JSON.stringify(err));
    res.status(500).send('An error occurred! See logs for more details.');
  }
};
It's almost identical to the one proposed in the documentation; I just added a couple of logs and I'm stringifying the error before logging it. Following the instructions on that page, I've also granted the my-project-#######appspot.gserviceaccount.com service account the roles/cloudfunctions.invoker role on my hello-world function:
$ gcloud functions add-iam-policy-binding hello-world \
> --member='serviceAccount:my-project-#######appspot.gserviceaccount.com' \
> --role='roles/cloudfunctions.invoker'
bindings:
- members:
- allUsers
- serviceAccount:my-project--#######appspot.gserviceaccount.com
role: roles/cloudfunctions.invoker
etag: ############
version: 1
But still, when I call the code above, I get 403 Access is forbidden. I'm sure this is returned by the hello-world function, since I can see the logs from my code: the token and the correct URL of the hello-world function both appear. I can also call the hello-world function directly from the GCP console. Both functions are Trigger type: HTTP; only the hello-world function has Ingress settings: Allow internal traffic only, while the other has Allow all traffic.
Can someone please help me understand what's wrong?
If your hello-world function is in Allow internal traffic only mode, this means:
Only requests from VPC networks in the same project or VPC Service Controls perimeter are allowed. All other requests are rejected.
To reach the function, you have to call it through your VPC. For this:
Create a serverless VPC connector in the same region as your function (take care: serverless VPC connectors are not available in every region!)
Add it to your second function
Route all the traffic through the serverless VPC connector (I'm not sure that routing only internal traffic works)
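The steps above can be sketched with gcloud roughly like this (connector name, region, network, and IP range are placeholders; the connector must live in the function's region):

```shell
# 1. Create a serverless VPC connector in the function's region
gcloud compute networks vpc-access connectors create my-connector \
  --region us-central1 \
  --network default \
  --range 10.8.0.0/28

# 2. + 3. Redeploy the calling function with the connector attached,
# routing all egress traffic through it
gcloud functions deploy proxy \
  --vpc-connector my-connector \
  --egress-settings all
```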