AWS Greengrass Error IncompatibleGGCVersionException - amazon-web-services

Hello everyone, I am running AWS Greengrass on my Raspberry Pi 3, which has Raspbian installed.
Greengrass is installed in the root directory. I have configured AWS Greengrass and installed all the files and certificates, and I am able to run AWS Greengrass on the Raspberry Pi.
The problem comes when I go to AWS Groups and click the deploy button: it gives me a Bad Request error. The error code says IncompatibleGGCVersionException and the message says Greengrass core version 1.9.4 is below the minimum required version 1.10.0.
However, there is no 1.10.0 version; the latest version is 1.9.4. Can someone please help?

Greengrass core 1.10.0 was released on Nov 25, and GGC can be downloaded from here. Also, if a group is created using Easy Group Create, it will add the new stream manager feature by default, and this feature requires Java 8 on the Greengrass core device. If you don't want to use the feature, you can edit the group to disable it.
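On the device itself, the upgrade comes down to replacing the core software with the 1.10.0 build. A minimal sketch, assuming Greengrass is installed at /greengrass (as in the question) and that the ARMv7l Raspberry Pi tarball from the download link above follows the usual naming pattern:

sudo /greengrass/ggc/core/greengrassd stop
sudo tar -xzvf greengrass-linux-armv7l-1.10.0.tar.gz -C /
sudo /greengrass/ggc/core/greengrassd start
java -version   # stream manager needs Java 8 on the core device

Once the core reports 1.10.0, the console deployment should no longer fail the version check.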

Related

Expo error when trying to build: Unsupported SDK version: our app builders don't have support for 33.0.0 version

When trying to build my Expo project, using expo build:ios, I get the following error:
Unsupported SDK version: our app builders don't have support for 33.0.0 version yet. Submitting the app to the Apple App Store may result in an unexpected behaviour
What is causing this error and how can I fix it?
What causes this error?
This error occurs because, as of March 31st 2020, the Expo client no longer supports Expo SDK 33. From the Expo release blog we see the following:
Dropping SDK 33 from the Expo client
We routinely drop SDK versions that have low usage in order to reduce the number of versions that we need to support. This release sees the end of life for SDK 33. As usual, your standalone apps built with SDK 33 will continue to work; however, SDK 33 projects will no longer work within the latest version of the Expo client. If you want to re-run expo build, then you’ll need to upgrade from SDK 33, preferably to SDK 37 so you won’t need to update again for a while (and also because each Expo version is better than the last!).
How do I fix this?
To fix this error you need to upgrade the SDK that your Expo project is using. Ideally you should upgrade to the latest version, in this case Expo SDK 37, as that will give you the longest time until you have to upgrade again.
To upgrade the SDK, Expo has a fantastic resource detailing what you have to do here. Each blog post gives steps on how to upgrade.
Here are the basic steps on how to upgrade (a consolidated command sketch follows these steps):
Run expo upgrade in your project directory (requires the latest version of expo-cli, you can update with npm i -g expo-cli).
Make sure to check the changelog for other breaking changes!
Update the Expo app on your phones from the App Store / Google Play. expo-cli will automatically update your apps in simulators if you delete the existing apps, or you can run expo client:install:ios and expo client:install:android.
If you built a standalone app previously, remember that you will need to create a new build in order to update the SDK version. Run expo build:ios and/or expo build:android when you are ready to do a new build for submission to stores.
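Taken together, the whole upgrade is a handful of commands, all run from the project directory:

npm i -g expo-cli            # make sure expo-cli itself is current
expo upgrade                 # bumps the SDK and project dependencies
expo client:install:ios      # refresh the Expo client in the iOS simulator
expo client:install:android  # refresh the Expo client in the Android emulator
expo build:ios               # rebuild standalone apps on the new SDK
expo build:android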
Make sure you check the changelogs
As you are upgrading from SDK 33, you will need to look at each of the changelogs along the way: 33 to 34, 34 to 35, 35 to 36, and finally 36 to 37. A breaking change in any one of those releases could affect something in your app.
How can I avoid this problem in the future?
Simply put, make sure that you keep your apps up to date. Upgrading across many versions of Expo and/or React Native can be cumbersome, as features are added and removed with each release. The easiest way to stay on top of it is to upgrade frequently. I find that setting aside a couple of days a month to check that the dependencies I am using are up to date means that I do not have to do massive upgrades. It also means I am in a better position to know what is going to cause a problem, and I have more time to fix it.
tl;dr
Update your Expo SDK version to the latest release.
Incompatible version of SDK
For me, it worked after updating the Expo app from the Google Play Store.

Incompatible Windows docker image in AWS ECS

I have created a standard Windows cluster in AWS Elastic Container Services (ECS) and am trying to deploy an ASP.NET Docker image (microsoft/aspnet:4.7.1-windowsservercore-1709) to it, and I get the following error:
Status reason CannotPullContainerError: a Windows version
10.0.16299-based image is incompatible with a 10.0.14393 host
My application is an ASP.NET Web API application using .NET Framework 4.6.1.
My Dockerfile is:
FROM microsoft/aspnet:4.7.1-windowsservercore-1709
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-obj/Docker/publish} .
Can anyone suggest what image I could deploy?
Thanks
Change your FROM to aspnet:4.7.1-windowsservercore-ltsc2016 and it should resolve your issue. Keep in mind the image size for this tag is considerably larger than 1709.
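In other words, only the base image tag in the question's Dockerfile changes; the rest stays the same:

# the ltsc2016-based image matches the Windows Server 2016 (10.0.14393) host
FROM microsoft/aspnet:4.7.1-windowsservercore-ltsc2016
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-obj/Docker/publish} .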
We also got the following message when using AWS ECS:
CannotPullContainerError: a Windows version 10.0.16299-based image is incompatible with a 10.0.14393 host
After a lot of trial and error we found that we were using the .NET Core SDK 2.2, and AWS ECS wants 2.1. The developer made changes in Visual Studio 2017 and to the Dockerfile to reference 2.1 instead of 2.2. Once that was done, ECS was able to consume it and we had a running state.
Unfortunately the error was not very descriptive, and we went down the rabbit hole before discovering what our problem really was.
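For reference, a minimal sketch of the kind of Dockerfile change involved, assuming the standard .NET Core 2.1 image tags; the actual image and app names from that project are not shown in the answer, so treat these as placeholders:

# hypothetical example: pin the runtime image to 2.1 instead of 2.2
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY obj/Docker/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]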

Ambari-agent "CERTIFICATE_VERIFY_FAILED", Is it safe to disable the certificate verification in Python?

Ambari version: 2.2.2.18
HDP stack: 2.4.3
OS: CentOS 7.3
Issue description:
Ambari-server can't communicate with the Ambari agent. I can see the below error in the ambari-agent logs:
ERROR 2017-09-18 06:35:34,684 NetUtil.py:84 - [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)
ERROR 2017-09-18 06:35:34,684 NetUtil.py:85 - SSLError: Failed to connect. Please check openssl library versions.
I started facing this issue recently, and it can be replicated consistently after the instances are restarted (I am using EC2 instances).
I am able to register agent nodes successfully, install the HDP cluster, run YARN jobs, etc. with no problem at all. Once I restart my instances, I see this problem.
There are some solutions already posted for this problem:
Downgrade Python from 2.7 to a lower version. This is a known problem with Ambari and Python 2.7.
Disable certificate verification by setting "verify = disable" under /etc/python/cert-verification.cfg.
I don't want to play with Python, as it can disrupt many things like Cassandra, the yum package manager, etc.
The second workaround is very easy, and it works well!
Now comes my question: is it safe to disable certificate verification in Python, i.e. by setting the property verify = disable?
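For context, that second workaround amounts to editing /etc/python/cert-verification.cfg so that it reads roughly as follows (a sketch; on CentOS the file normally carries an [https] section whose default is verify=platform_default):

[https]
verify=disable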
Generally, it's a bad idea. If somebody has access to the port on the server that is used for agent-server communication (8443, if I'm not mistaken), they can register as an agent and get all your cluster configs and passwords. A classic man-in-the-middle attack would also allow the same by reading your unencrypted traffic. A somewhat more involved attack would allow sending commands to the agents (probably with root permissions).
Your issue sounds like you reprovisioned your ambari-server host and left old ambari-agent instances running, or maybe your certificates became outdated. On first connection to ambari-server, agents generate certificates and send them to the server. The server signs these certificates with its own key, so the server-agent connection is then encrypted. Did you try to remove the old certificates and restart the server and agents, as suggested here?
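Clearing the stale certificates on each agent looks roughly like this, a sketch assuming the default key directory /var/lib/ambari-agent/keys:

sudo ambari-agent stop
# remove the stale certificates so the agent re-registers against the new server key
sudo rm -f /var/lib/ambari-agent/keys/*.crt /var/lib/ambari-agent/keys/*.key
sudo ambari-agent start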
How we investigated this issue and what solution we adopted:
Investigation details:
Downgrading to Python 2.6 is not feasible as there are OS dependencies, and per the suggestion from 'Dmitriusan' in the previous answer, it's not a good idea to disable certificate verification in Python.
We use AWS EC2.
With Python 2.7, JDK 1.8, and CentOS 7.2 there is no issue; everything is smooth.
With Python 2.7, JDK 1.8, and CentOS 7.3 or CentOS 7.4, we see this issue.
The issue I have reported here is with respect to CentOS 7.3; with CentOS 7.4 the issue is slightly different: certificate verification fails while adding nodes to the cluster itself.
Downgrading from CentOS 7.3 to 7.2 is not straightforward. The AWS EC2 marketplace provides a CentOS 7.0 image, and when we create an instance from this image, it applies security and patch updates, resulting in CentOS 7.3.
We could create our own CentOS 7.2 image from existing servers, but it's always good to be on the latest OS update for security reasons.
In short, we had workarounds but not a solution.
The solution we adopted:
After a series of tests, we decided to upgrade to CentOS 7.4, HDP 2.6.3.0, and Ambari 2.6.0.0.
With CentOS 7.4 and Ambari 2.6.0.0, we don't see this issue even though Python 2.7.5 is installed.
So this looks to be an issue with Ambari.
The older version of Ambari (2.4.2) does not recognize the force-TLS configuration. We upgraded Ambari to 2.6.2 and the heartbeat started working.
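The force-TLS configuration referred to here is, assuming the commonly documented setting, an entry in /etc/ambari-agent/conf/ambari-agent.ini on each agent:

[security]
force_https_protocol=PROTOCOL_TLSv1_2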

Python/Django Elastic Beanstalk now failing on deploy

I'm working on a project that I haven't touched in about 4 months. Before, everything on deploy was working fine, but now I'm getting an error when trying to deploy an update.
Failed to pull Docker image amazon/aws-eb-python:3.4.2-onbuild-3.5.1: Pulling repository amazon/aws-eb-python time="2016-01-17T01:40:45Z" level="fatal" msg="Could not reach any registry endpoint" . Check snapshot logs for details. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
In the eb-activity log, it further states [CMD-AppDeploy/AppDeployStage0/AppDeployPreHook/03build.sh] : Activity execution failed, because: Pulling repository amazon/aws-eb-python before repeating what was shown in the UI.
The original was using a Preconfigured Docker 64bit Debian jessie v1.3.1 running Python 3.4. I've tried upgrading to the latest, which is version 2.0.6, but it never completes (no need to get into the specifics of that error; it's a separate issue, and I'd like to stay on 1.3.1 if possible). I've also tried upgrading to the latest 1.x, but it has the same result as upgrading to 2.0.6.
Any ideas, or anything else I should be looking at for clues?
Docker Hub has deprecated pulls from Docker clients on 1.5 and earlier. Make sure that your Docker client version is above 1.5. See https://blog.docker.com/2015/10/docker-hub-deprecation-1-5/ for more information.
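A quick way to check what the instance is actually running, as a sanity check:

docker version --format '{{.Client.Version}}'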

Couchbase Community Upgrade - couchbase-server (3.x) conflicts with couchbase-server-community (4.x)

I am trying to upgrade a Couchbase Community server that is currently running 3.0 to 4.0. I am using 'Amazon Linux' on AWS, and I used the CentOS 6 build to upgrade from 2.5 to 3.0; that upgrade was super smooth. According to the documentation, I should be able to go from 3.x to 4.x just fine as well.
http://developer.couchbase.com/documentation/server/4.0/install/upgrade-matrix.html:
Upgrade from the latest version 3.x directly to version 4.x using any supported upgrade strategy.
But I get the message
couchbase-server conflicts with couchbase-server-community-4.0.0-4051.x86_64
I have found that the couchbase-server name is now reserved for the enterprise edition, and couchbase-server-community is now used in 4.0 for the community edition, which would explain the conflict. https://issues.couchbase.com/browse/MB-15716
Is this really an upgrade-breaking change? I cannot find any documentation on how to resolve this change short of uninstalling and reinstalling.
If it were me, since you are on AWS, I would just spin up new instances, install Couchbase on them, and do rebalances where you add a new node in and remove an old one (1 in, 1 out; or 2 in, 2 out; etc.). With the same number of nodes going in and out of the cluster, the cluster will do a swap rebalance, which is the most efficient. All of this can be done while up and serving traffic. This is a very standard upgrade path and the recommended approach when in the cloud.
Once upgraded, discard the old instances. Yes, you are running more instances at the same time during the upgrade, but for the cost of a few lattes you are upgraded smoothly.
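One swap step with couchbase-cli looks roughly like this; the hostnames and credentials are placeholders:

# add the new 4.0 node and remove an old 3.0 node in a single rebalance
couchbase-cli rebalance -c existing-node:8091 -u Administrator -p password \
  --server-add=new-node:8091 \
  --server-add-username=Administrator --server-add-password=password \
  --server-remove=old-node:8091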
I have experienced the same conflict when trying to upgrade from Community version 3.0.1 to Community 4.0.0.
It is worth mentioning that if you uninstall the 3.0.1 version and then install 4.0.0, all your buckets and their data are kept. There may be cases where this would fail, so it is always good to take a backup first, but in my case the transition was smooth.
This was on my developer machine; for a cloud installation I really like the swap in/out approach, which means you can do the upgrade without interrupting the service.
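On an RPM-based system like Amazon Linux, that uninstall/reinstall amounts to something like the following sketch (take a backup first; the exact .rpm filename depends on the build you downloaded):

sudo rpm -e couchbase-server
# per the note above, buckets and their data are kept across the reinstall
sudo rpm -ivh couchbase-server-community-4.0.0-*.x86_64.rpm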