I am trying to deploy a service to AWS using the Serverless Framework. I am deploying it through GitLab CI/CD instead of doing it locally. Initially my Serverless version was the latest (I had not pinned a specific version), but when I pushed my code to GitLab I got a few errors in the pipeline because the latest version is not stable, so I had to change to a stable version. Now when I push my code changes to GitLab, my deployment fails with:
Serverless Error ----------------------------------------
Cannot run local installation of the Serverless Framework by the outdated global version. Please upgrade via:
npm install -g serverless
Note: Latest release can run any version of the locally installed Serverless Framework.
I don't want to upgrade my Serverless version.
In my gitlab-ci.yml I have changed
- npm install -g serverless
to this
- npm install -g serverless@2.69.1
Is there any way I can fix this?
Any help would be appreciated, thank you.
In your case, the most likely reason is that you have a local installation of the Serverless Framework, or that some of your plugins or other dependencies list Serverless v3 in their peer dependencies and install it by default on npm@7 and higher.
To resolve it, either remove the local installation or pin the version of the locally installed Serverless (in the devDependencies of your project's package.json).
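For example, pinning could look like this (a minimal sketch, assuming v2.69.1 is the version you want to stay on):
# Record the exact Serverless version in devDependencies of package.json
npm install --save-dev serverless@2.69.1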
./node_modules/.bin/sls deploy does the trick.
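In a GitLab CI job, that might look like this (a sketch, assuming Serverless is pinned in devDependencies as above):
# Install the exact versions from the lockfile, then call the local binary
# directly so the outdated global installation is never involved
npm ci
./node_modules/.bin/sls deploy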
However, the proper answer is in the docs:
There are 2 scenarios:
Using v3 globally, and v2 in specific projects.
This is the simplest. Upgrade the global version to v3, and install v2 in specific projects (via NPM). The serverless command will automatically run the correct version (v3 can run v2).
Using v2 globally, and v3 in specific projects.
To achieve that, install v3 in specific projects (via NPM). Then, use serverless for v2 projects, and npx serverless for v3 projects.
https://www.serverless.com/framework/docs/guides/upgrading-v3#using-v2-and-v3-in-different-projects
I am currently running Google Cloud Composer with Composer version 2.0.9 and Airflow version 2.1.4. I am trying to install the most recent version of dbt (1.0.4 for core and 1.0.0 for the BigQuery plugin). Because Cloud Composer images have specific packages pre-installed, I am getting conflicting PyPI dependency issues. When I try to fix one dependency, another issue occurs. Does anyone know the specific set of packages that would resolve this issue? I have read the following posts by the community, but I wanted to know if anyone has a solution using just Composer:
How to run DBT in airflow without copying our repo
How to set up dbt with Google Cloud Composer?
I was able to reproduce the behaviour you are seeing. Below are the dependency conflicts I saw in the Cloud Build logs. These conflicts are occurring between the dbt-core requirements and the pre-installed package requirements in Composer.
Pre-installed package requirements:
hologram 0.0.14 has requirement jsonschema<3.2,>=3.0, but you have jsonschema 3.2.0. ##=> can be installed manually
flask 1.1.4 has requirement click<8.0,>=5.1, but you have click 8.1.2.
apache-airflow 2.1.4+composer has requirement markupsafe<2.0,>=1.1.1, but you have markupsafe 2.0.1.
looker-sdk 22.4.0 has requirement typing-extensions>=4.1.1, but you have typing-extensions 3.10.0.2.
dbt-core requirements:
hologram 0.0.14 has requirement jsonschema<3.2,>=3.0, but you have jsonschema 3.2.0. ##=> can be installed manually
dbt-core 1.0.4 has requirement click<9,>=8, but you have click 7.1.2.
dbt-core 1.0.4 has requirement MarkupSafe==2.0.1, but you have markupsafe 1.1.1.
dbt-core 1.0.4 has requirement typing-extensions<3.11,>=3.7.4, but you have typing-extensions 4.1.1.
I tried downgrading the pre-installed packages, but then subsequent package installations fail, and downgrading is not recommended anyway.
Therefore, I would suggest using an external solution as stated in this thread you have linked. Quoting the workarounds given in @Ryan Yuan's answer here:
Using external services to run dbt jobs, e.g. Cloud Run.
Using Composer's KubernetesPodOperator (updated Composer 2 link). My colleague has put up a nice article on dbt Discourse here going through the setup process.
Ignoring Composer's dependency conflicts by setting Composer's environment variable IGNORE_PYPI_DEPENDENCY_CONFLICTS to True (see the sketch after this list). However, I don't recommend this as it may cause potential issues.
Creating a Python virtual environment in Composer and installing the dbt packages there.
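For the environment-variable workaround, a hypothetical gcloud invocation (the environment name and location are placeholders):
# Not recommended for production: this silences dependency-conflict checks
gcloud composer environments update my-composer-env \
    --location us-central1 \
    --update-env-variables IGNORE_PYPI_DEPENDENCY_CONFLICTS=True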
As mentioned by @Kabilan Mohanraj, the current version of dbt (1.0.4) and more recent versions of Composer (Composer version 2.0.9 with Airflow version 2.1.4) have dependency issues, so an alternative solution is needed. In my case, I experimented and searched for solutions from other people in the community, and found one combination of Composer and dbt versions that had only minimal dependency issues. However, as mentioned by @Kabilan Mohanraj, Google does not recommend downgrading pre-installed packages, so this would not be a viable solution for something in production.
Create the Composer environment through gcloud to use an older image version that is not available via the Composer UI:
gcloud composer environments create my_airflow_dbt_example \
    --location us-central1 \
    --image-version composer-1.17.9-airflow-2.1.4
requirements:
dbt-bigquery==0.21.0
jsonschema==3.1.1
packaging==20.9
For this specific Composer version, you are downgrading jsonschema from 3.2.0 to 3.1.1 and packaging from 21.3 to 20.9; the pins can be applied as sketched below.
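A hypothetical way to apply those pins with gcloud, assuming they are saved in a requirements.txt file (the environment name matches the create command above):
gcloud composer environments update my_airflow_dbt_example \
    --location us-central1 \
    --update-pypi-packages-from-file requirements.txt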
I have issues with CDK while trying to bundle lambdas with esbuild, working in WSL2 Debian.
esbuild is installed as a global npm package and also in the devDependencies of my CDK project.
node --version
v14.16.0
cdk --version
1.95.1
esbuild --version
0.11.2
Example of a lambda definition:
# Imports assumed from the aliases used below (CDK v1, Python)
from aws_cdk import aws_lambda as _lambda
from aws_cdk import aws_lambda_nodejs as _lambda_node

lex_create_bot = _lambda_node.NodejsFunction(
    self,
    id="lambda-lex-create-bot",
    entry="lambdas_fns/lex_create_bot/lex-create-bot.ts",
    handler="handler",
    runtime=_lambda.Runtime.NODEJS_14_X,
    bundling={"minify": True},
)
Every time I try to deploy or check a diff, CDK tries to bundle the lambdas with Docker instead of esbuild.
I have worked on this stack for a while and everything was fine until I switched from Remote - Containers to WSL2 to manage my dev environment in VS Code.
Docker is really slow for bundling and creates diffs for already-deployed lambdas that have no code changes.
Any idea how to solve this?
EDIT
Same issue with Ubuntu 20.04 on WSL2.
I upgraded to CDK 1.97.0 and esbuild 0.11.5 this morning and everything is working well now.
Still a strange behavior that I want to avoid in the future; if anyone has a more generic solution to this problem ...
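For anyone hitting this, a quick sanity check (an assumption based on NodejsFunction falling back to Docker bundling whenever it cannot resolve esbuild from the project):
# Run from the CDK project root; if this fails, CDK will bundle with Docker
npx --no-install esbuild --version
# If it is missing, reinstall it as a dev dependency
npm install --save-dev esbuild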
I'm trying to schedule a task on ECS (running on Fargate), via CDK. When it runs, my task uses the LATEST platform version (which is currently 1.3.0, despite 1.4.0 having been released). I'd like to use version 1.4.0, but I can't see a way to specify that in the CDK objects - it only seems to be supported when creating a FargateService, whereas I'm using a FargateTaskDefinition. Does anyone know if this is possible? (It's possible via the direct API, and via the console).
You have to upgrade your aws-cdk first.
npm install -g aws-cdk
Then upgrade the dependencies in package.json:
npx npm-check -u
the latest "#aws-cdk/aws-ecs": 1.72 allows you to specify the platform version in ecs task:
// Imports assumed (CDK v1)
import { FargatePlatformVersion } from '@aws-cdk/aws-ecs';
import { EcsTask } from '@aws-cdk/aws-events-targets';

new EcsTask({
  ...
  platformVersion: FargatePlatformVersion.VERSION1_4
})
I am using LoopBack 4 to create a RESTful API. I'm a mobile developer by trade, so TypeScript et al. is all pretty new to me, so please be kind ;)
I created the app using CLI v1.21.4 and saw a message saying that an update is available. I therefore updated my global installation of the CLI. But then when I try to run one of the commands such as lb4 model, I see the message:
The project was originally generated by @loopback/cli@1.21.4.
The following dependencies are incompatible with @loopback/cli@1.23.1:
typescript: ~3.5.3 (cli ~3.6.3)
@loopback/authentication: ^2.2.2 (cli ^3.1.1)
I would of course like to take advantage of these newer modules, but I am unsure how to update my app scaffolding and dependencies. Could anyone offer some advice please?
Please check out https://github.com/strongloop/loopback-next/issues/3608:
During lb4 app, we add the CLI version to .yo-rc.json, such as:
{
  "@loopback/cli": {
    "version": "1.21.4"
  }
}
lb4 -v lists compatible modules that are released with the CLI.
lb4 commands check if the project has incompatible versions with the current CLI and prompt users to force or exit.
I would of course like to take advantage of these newer modules, but I am unsure how to update my app scaffolding and dependencies.
The process for updating dependencies is not specific to LoopBack. If you are using npm, then simply run npm update.
Please note that TypeScript often introduces backwards-incompatible changes in semver-minor releases, and 3.6 brought a few of them. Be prepared to manually fix a few compilation errors after the upgrade.
I think that npm update is not going to jump from v2 to v3 for @loopback/authentication; you have to request that upgrade explicitly:
$ npm install @loopback/authentication@latest
There is now a supported update procedure, which is documented here:
https://loopback.io/doc/en/lb4/Update-generator.html
It seems to be simply:
# Ensure you have the latest version of the CLI tool
npm install -g @loopback/cli
# Then ask the tool to check which packages should be upgraded
lb4 update
I wish to implement the answer that is outlined here: https://stackoverflow.com/a/50397276/1980516
However, I find that I keep running into Unable to import module 'index' at exactly this line:
const _archiver = require('archiver');
So, I'm guessing that I cannot do this via the online console. Instead, I probably have to create a deployment package.
How do I go about this? I apparently need AWS CLI, Node.js, npm and I'm new to all of it. In the Amazon docs I can't find a practical list of how to set up my local development environment.
What tools do I install, which versions and in what order exactly?
Edit: Windows :)
My guess is that you need to npm install archiver and package the node_modules dependencies along with your index.js (the handler file for your Lambda entry point). You can zip it up and deploy/upload it to your Lambda, as sketched below.
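A minimal packaging sketch (my-function is a placeholder name; assumes index.js exports handler):
# Pull the missing dependency into node_modules
npm install archiver
# Bundle the handler together with its dependencies
# (on Windows, any zip tool producing a standard archive works)
zip -r function.zip index.js node_modules
# Upload via the console, or with the AWS CLI:
aws lambda update-function-code --function-name my-function --zip-file fileb://function.zip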
Also have a look at the Serverless framework (https://github.com/serverless/serverless), which makes these types of things easier.
Have a look at AWS SAM, the Serverless Application Model. It provides a local development setup for things like Lambda functions and API Gateway endpoints, and a way to easily package and deploy things. The exact steps you need are:
Create an AWS account and an IAM user with admin privileges
Install Node.js
Install the AWS CLI (and configure it with aws configure)
Install the SAM CLI and Docker (the local instances run in Docker containers)
Initialize a new SAM project with sam init --runtime nodejs (or another runtime if needed)
Run through the quickstart to get an idea of how to define a SAM template, build a SAM app, and deploy; a sketch follows below.
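A sketch of that workflow with the SAM CLI (the runtime and project name are illustrative):
# Scaffold a starter app, build it, and run a guided first deployment
sam init --runtime nodejs14.x --name my-sam-app
cd my-sam-app
sam build
sam deploy --guided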
If you don't want to use the framework or local development environment and just want to create the source bundle, there are docs. The gist is:
Install Node.js (e.g. using Homebrew or an installer)
npm install the modules you need
Zip up your code including the node_modules folder
Upload the zip via the AWS Console