GCP Composer Airflow 2 - Is the Preview version stable? - google-cloud-platform

On the 13th of May 2021 GCP added composer-1.17.0-preview.0-airflow-2.0.1 to their composer version list.
It has already been upgraded a few times since then with "Full support end date" for each version, however it is still flagged as a Preview version.
I have already created a composer instance of composer-1.17.0-preview.7-airflow-2.0.2 and it seems to work smoothly.
My question is what is the meaning of this "preview version"?
Is it production worthy? If not, what is the purpose of it?

I think it should run smoothly and is ready for production traffic. The main thing you don't get with a preview, I believe, is the full set of guarantees that an "official" release has.
This is mainly about https://cloud.google.com/composer/docs/concepts/versioning/composer-versioning-overview#version-deprecation-and-support - you might need to migrate to a newer, officially supported version of Airflow sooner than you otherwise would, and some particular things may not be fully supported during that migration.
So while you do not get all the benefits of running a managed service (which typically frees you from worrying about maintenance almost completely), you should expect some small maintenance and migration overhead when the officially supported version is released.
However, my opinion is that it should be production-ready in general, and if you are considering starting your Airflow installation, Airflow 2.0.2 is a good choice. At the recent Airflow Summit talk, the Composer team mentioned that they are going to move Airflow 2 out of preview pretty soon. Also, Airflow 2 as a product released by the community has moved a long way beyond where 1.10 was.
Unlike the 1.10.* versions, Airflow 2 fully follows the SemVer approach. This means that migrating between 2.* versions should be easy and backwards compatible. The Airflow community takes its SemVer promises very seriously: https://github.com/apache/airflow#semantic-versioning
So I'd say you should expect very little disruption even if you have to migrate to a newer version of Airflow in Composer sooner rather than later.
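To illustrate what the SemVer promise buys you, here is a small sketch (a hypothetical helper, not part of Airflow or Composer) of the rule that upgrades within the same major version are backwards compatible, while a major-version jump like 1.10.* to 2.* is not:

```python
def parse_semver(version: str) -> tuple:
    """Split a SemVer string like '2.0.2' into an (int, int, int) tuple."""
    major, minor, patch = version.split(".")
    return int(major), int(minor), int(patch)

def upgrade_is_backwards_compatible(current: str, target: str) -> bool:
    """Under SemVer, an upgrade stays backwards compatible as long as the
    major version does not change and the target is not older."""
    cur, tgt = parse_semver(current), parse_semver(target)
    return tgt[0] == cur[0] and tgt >= cur

# Moving between 2.* releases should be painless...
print(upgrade_is_backwards_compatible("2.0.2", "2.1.0"))    # True
# ...while 1.10.* to 2.* is a major-version jump with breaking changes.
print(upgrade_is_backwards_compatible("1.10.15", "2.0.2"))  # False
```

This is exactly why the answer above expects "very little disruption" between 2.* versions: only a major-version bump signals breaking changes.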

Composer's support for Airflow 2 is now at the GA level. Please take a look at the release notes: https://cloud.google.com/composer/docs/release-notes#September_15_2021

Related

How often do Cloud Build Node.js versions update?

I couldn't stomach paying $150 for GCP's support service for this one question. I'm just looking to understand the release schedule for Cloud Build's Node.js versions. It's still stuck on Node.js v10.10, and my projects are starting to require higher versions to build. According to Cloud Build's changelog, I don't believe the Node.js version has been updated in years. Any ideas?
As per the official Github repository:
Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js.
So this means it should work with Node.js 12, and updates should be more regular. In addition, the documentation here says that if you are using a Cloud Build config file, you can use Node.js 12, so the latest Node.js version should be compatible with Cloud Build.
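As a sketch of what that looks like, a Cloud Build config file can pin the build steps to a specific Node.js image instead of relying on the default builder (the image tag and npm steps below are illustrative, not taken from the question's project):

```yaml
# cloudbuild.yaml - run the build inside a pinned Node.js container image
steps:
  - name: 'node:12'        # any Docker Hub Node image tag can go here
    entrypoint: 'npm'
    args: ['install']
  - name: 'node:12'
    entrypoint: 'npm'
    args: ['test']
```

With this approach the Node.js version used for the build is decoupled from whatever Cloud Build ships by default.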
To summarize: according to the repository, it should follow the Node.js schedule. However, if you think this is not happening, I would recommend raising a bug on Google's Issue Tracker - it's free, by the way - so they can assess it.

How can I upgrade my existing AI platform-pipelines deployment to a newer version?

I currently have a deployment of AI Platform Pipelines v0.2.5 running. I saw that 8 days ago a new version, v0.5.1, was added to the container registry. There have been a lot of changes, fixes, etc. between these versions, and I would like to update my current deployment. Is there an easy way of doing so without losing my experiments, pipeline runs, etc.?

How can I deploy code changes in AWS in a CircleCI 1.0 project?

CircleCI is requiring everyone to migrate to version 2.0 of their configuration format, but I have not had time to do so. I plan to migrate away from the platform anyway, so I do not wish to migrate to 2.0 now.
Even after the EOL of 1.0, I was still able to deploy minor code changes, which was all that was necessary to maintain my system at this time.
However, a minor code change I tried to deploy earlier this week failed.
I don't want to migrate to 2.0, but want to deploy the code change (it's 2 lines).
I'm using Github and deploying to AWS.
How I can "circumvent" CircleCI to push this minor code change in Github to AWS?
The 1.0 version was officially "sunsetted" on August 31st, 2018, but I think they might have unofficially given an extension to folks still on 1.0, and it is only now fully being turned off. However, 2.0 has been available for more than a year now, which should have been long enough to do the migration.
My experience of CircleCI support is that they will help on their forum if you have a bug. Paying customers can log a ticket (which is a more reliable way to contact them). Of course, if your support request is to re-enable 1.0, or to do the migration for you, then that is not a reasonable request.
The trade-off with hosted CI, which will have saved you maintenance time and costs over the long term, is that sometimes migrations will be necessary. Engineering teams should schedule upgrade time into their diary so that these things can be tackled in good time.
Direct answer
For your deployment now, I suggest you run your tests locally and deploy from a development machine. After that, I would suggest making the upgrade to 2.0 (or the move to a different provider) a high priority for your team.
In other words, I do not believe there is a way in which you can do a deployment using a 1.0 configuration file. However, if you won't move to 2.0, and you do not wish to do a deployment from your own dev machine, you could try asking tech support whether you (or they) can do a special 1.0 run. It is conceivable they still have the capacity to do so.

Migrating Sitecore 6.6 to Sitecore 8

Sitecore 8 was recently released, and it comes with a lot of exciting new features, so our team decided to move from Sitecore 6.6 to Sitecore 8. Before migrating, I would like to know what I should have in hand, such as the .NET Framework version, hardware configuration, environment, etc.
I would also like to know the procedure for migrating from 6.6 to 8. I have never been involved in a Sitecore migration project before. Please suggest some good articles, or post your thoughts here.
Thanks in advance. :)
See the Sitecore Compatibility Table for the .NET Framework, SQL Server version and Windows version.
Two common approaches:
1) Follow the Sitecore upgrade path.
2) Package the content and start with a clean install.
I am currently working on a scripted upgrade that follows the Sitecore path, so I can easily repeat the steps and keep the latest content in the databases.
I have put down some of my findings here: Sitecore update and modules. That article also contains a Related Links section, including the upgrade white paper from Varun.
Depending on how 'cluttered' your existing instance is, you may also want to consider installing a fresh copy of Sitecore 8 and then migrating your data/code, to avoid all the hops that would otherwise be necessary to get to 8.
Maybe the following blog will help. Take a look at it:
https://varunvns.wordpress.com/2014/11/11/sitecore-version-upgrade-whitepaper/
I would recommend you make a backup of your site to use as a "sandbox" for the upgrade. Copy your databases and the web root for your site to a new location and then set up an IIS site with appropriate permissions pointing to your copy, and change your connection strings in the copy to point to a copy of the databases you backed up.
Perform the update there and ensure everything is working correctly. Work slowly to make sure you are following instructions correctly and note any special actions you had to take to perform the upgrade. Once you have it upgraded, perform the same process on the "real" site.
If you work with a Sitecore partner, I would highly encourage you to discuss the process with them to learn more specifics about the risks and challenges you may encounter during your upgrade.

Are daily builds the way to go for a web app?

Joel seems to think highly of daily builds. For a traditional compiled application I can certainly see his justification, but how does this parallel over to web development -- or does it not?
A bit about the project I'm asking for --
There are 2 developers working on a Django (Python) web app. We have 1 svn repository. Each developer maintains a checkout and their own copy of MySQL running locally (if you're unfamiliar with Django, it comes bundled with its own test server, much the way ASP apps can run inside of Visual Studio). Development and testing are done locally, then committed back to the repository. The actual working copy of the website is an SVN checkout (I know about SVN export and it takes too long). The closest we have to a 'build' is a batch file that runs an SVN update on the working copy, does the Django bits ('manage.py syncdb'), updates the search engine cache (Solr), then restarts Apache.
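For reference, the batch-file "build" described above could be sketched as a small script like the one below. This is a hypothetical reconstruction: the paths, the Solr-refresh command, and the Apache restart are placeholders based on the description, not the real batch file.

```python
import subprocess

# The deploy steps described above, in order. Paths and command names
# are placeholders for whatever the real environment uses.
DEPLOY_STEPS = [
    ["svn", "update", "/var/www/site"],        # refresh the working copy
    ["python", "manage.py", "syncdb"],         # apply Django model changes
    ["python", "manage.py", "rebuild_index"],  # refresh the Solr cache (placeholder)
    ["apachectl", "graceful"],                 # restart Apache
]

def deploy(dry_run: bool = False) -> list:
    """Run each step in order; with dry_run=True, just report the commands."""
    executed = []
    for cmd in DEPLOY_STEPS:
        if not dry_run:
            subprocess.check_call(cmd)  # stop at the first failing step
        executed.append(" ".join(cmd))
    return executed

if __name__ == "__main__":
    for line in deploy(dry_run=True):
        print(line)
```

The point of wrapping the steps in one script is that a nightly job (or a person) can run the whole sequence identically every time, which is most of what a "build" means for a web app like this.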
I guess what I don't see is the parallel to web apps.
Are you doing a source controlled web app with 'nightly builds' -- if so, what does that look like?
You can easily run all of your Django unit tests through the Django testing framework as your nightly build.
That's what we do.
We also have some ordinary unit tests that don't leverage Django features, and we run those, also.
Even though Python (and Django) don't require the kind of nightly compile/link/unit test that compiled languages do, you still benefit from the daily discipline of "Don't Break The Build". And a daily cycle of unit testing everything you own is a good thing.
We're in the throes of looking at Python 2.6 (which works perfectly for us) and running our unit tests with the -3 option to see which deprecated features we're using. Having a full suite of unit tests assures us that a change for Python 3 compatibility won't break the build. And running them nightly means that we have to be sure we're refactoring correctly.
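A minimal sketch of the "run everything nightly" idea, using only the standard unittest module (the test case here is a stand-in for a real project suite):

```python
import unittest

class SmokeTest(unittest.TestCase):
    """Stand-in for the project's real unit tests."""
    def test_sanity(self):
        self.assertEqual(1 + 1, 2)

def run_nightly_suite() -> bool:
    """Run the suite and return True if the build is green.
    A cron job can call this and alert the team on a False result."""
    suite = unittest.TestLoader().loadTestsFromTestCase(SmokeTest)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()

if __name__ == "__main__":
    print("BUILD OK" if run_nightly_suite() else "BUILD BROKEN")
```

In a Django project you would typically let `manage.py test` do the discovery instead, but the principle is the same: one command, run on a schedule, with a clear pass/fail answer.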
Continuous integration is useful if you have the right processes around it. TeamCity from JetBrains is a great starting point if you want to build familiarity:
http://www.jetbrains.com/teamcity/index.html
There's a great article that relates directly to Django here:
http://www.ajaxline.com/continuous-integration-in-django-project
Hope this gets you started.
Web applications built in dynamic languages may not require a "compilation" step, but there can still be a number of "build" steps involved in getting the app to run. Your build scripts might install or upgrade dependencies, perform database migrations, and then run the test suite to ensure that the code is "clean" with respect to the actual checked-in version in the repository. Or you might deploy a copy of the code to a test server, then run a set of Selenium integration tests against the new version to ensure that core site functionality still works.
It may help to do some reading on the topic of Continuous Integration, which is a very useful practice for webapp dev teams. The more fast-paced and agile your development process, the more you need regular input from automated testing and quality metrics to make sure you fail fast and loud on any broken version of the code.
If it's really just you and one other developer working on it, nightly builds are probably not going to give you much.
I would say that the web app equivalent of nightly builds would be staging sites (which can be built nightly).
Where nightly builds to a staging area start paying real dividends is when you have clients, project managers, and QA people that need to be able to see an up to date, but relatively stable version of the app. Your developer sandboxes (if you're like me, at least) probably spend a lot of time in an unusable state as you're breaking things trying to get the next feature implemented. So the typical problem is that a QA person wants to verify that a bug is fixed, or a PM wants to check that some planned feature was implemented correctly, or a client wants to see that you've made progress on the issue that they care about. If they only have access to developer sandboxes, there's a good chance that when they get around to looking at it, either the sandbox version isn't running (since it means ./manage.py runserver is up in a terminal somewhere) or it's in a broken state because of something else. That really slows down the whole team and wastes a lot of time.
It sounds like you don't have a staging setup since you just automatically update the production version. That could be fine if you're way more careful and disciplined than I (and I think most developers) am and never commit anything that isn't totally bulletproof. Personally, I'd rather make sure that my work has made it through at least some cursory QA by someone other than me before it hits production.
So, in conclusion, the setup where I work:
each developer runs their own sandbox locally (same as you do it)
there's a "common" staging sandbox on a dev server that gets updated nightly from a cronjob. PMs, clients, and QA go there. They are never given direct access to developer sandboxes.
There's an automated (though manually initiated) deployment to production. A developer or the PM can "push" to production when we feel things have been sufficiently QA'd and are stable and safe.
I'd say the only downside (besides a bit of extra overhead setting up the nightly staging builds) is that it makes for a day of turnaround on bug verification. I.e., QA reports a bug in the software (based on looking at that day's nightly build), the developer fixes the bug and commits, then QA must wait until the next day's build to check that the bug is actually fixed. It's usually not that much of a problem, since everyone has enough going on that it doesn't affect the schedule. When a milestone is approaching, though, and we're in a feature-frozen, bugfix-only mode, we'll do more frequent manual updates of the staging site.
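The nightly staging refresh described above can be as simple as one cron entry on the dev server (the user, paths, and script name below are illustrative):

```cron
# Update the common staging sandbox at 2 a.m. every night
0 2 * * * deploy /opt/site/update_staging.sh >> /var/log/staging-build.log 2>&1
```

Logging the output means that when PMs or QA find the staging site broken in the morning, the first question ("did last night's build even run?") has an answer.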
I've had great success using Hudson for continuous integration. Details on using Hudson with Python by Redsolo.
A few months ago, several articles espousing continuous deployment caused quite a stir online. IMVU has details on how they deploy up to 5 times a day.
The whole idea behind frequent builds (nightly or more frequent like in continuous integration) is to get immediate feedback in order to reduce the elapsed time between the introduction of a problem and its detection. So, building frequently is useful only if you are able to generate some feedback through compilation, (ideally automated) testing, quality checks, etc. Without feedback, there is no real point.