We currently use Google Optimize for A/B testing, which involves copying and pasting multiple JavaScript snippets into the Optimize website for each variant.
I'd like to be able to do this from our CI/CD pipeline, rather than it being a manual and potentially error-prone task.
I haven't been able to find any Optimize APIs, other than for use in the browser.
Is there a programmatic way to send variant scripts to the Optimize platform automatically?
Thanks :)
This is a design approach question where we are trying to pick the best option between Apache Beam / Google Dataflow and Cloud Run to pull data from HTTP endpoints (source) and send it downstream to Google BigQuery (sink).
Traditionally we have implemented similar functionality using Google Dataflow, where the sources are files in a Google Cloud Storage bucket or messages in Google Pub/Sub, etc. In those cases the data arrives in a 'push' fashion, so it makes much more sense to use a streaming Dataflow job.
However, in the new requirement, since the data is fetched periodically from an HTTP endpoint, it sounds reasonable to use a Cloud Run service that spins up on a schedule.
So I want to gather the pros and cons of each approach, so that we can come up with a sensible design.
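Just to make the idea concrete, here is a rough sketch (in Go) of what the scheduled Cloud Run service might look like; the endpoint URL, project, dataset and table names are placeholders, and it assumes Cloud Scheduler triggers the service over HTTP:

```go
// Minimal sketch of a Cloud Run service that is triggered on a schedule
// (e.g. by Cloud Scheduler), pulls records from an HTTP endpoint and
// streams them into BigQuery. The endpoint URL, project, dataset and
// table names below are placeholders.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"os"

	"cloud.google.com/go/bigquery"
)

// Record mirrors the payload returned by the source endpoint (assumed shape).
type Record struct {
	ID    string  `bigquery:"id" json:"id"`
	Value float64 `bigquery:"value" json:"value"`
}

func handler(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()

	// 1. Pull data from the source HTTP endpoint.
	resp, err := http.Get("https://example.com/api/records") // placeholder URL
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()

	var records []Record
	if err := json.NewDecoder(resp.Body).Decode(&records); err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}

	// 2. Stream the rows into the BigQuery sink.
	client, err := bigquery.NewClient(ctx, "my-project") // placeholder project ID
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	defer client.Close()

	ins := client.Dataset("my_dataset").Table("my_table").Inserter()
	if err := ins.Put(ctx, records); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	port := os.Getenv("PORT") // Cloud Run provides PORT
	if port == "" {
		port = "8080"
	}
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```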
I am not sure this question is appropriate for SO, as it opens a big discussion with different opinions, without clear context, scope, functional and non-functional requirements, time and finance restrictions including CAPEX/OPEX, who is going to support the solution in BAU after commissioning and how, etc.
In my personal experience, I have developed a few dozen similar pipelines using various combinations of Cloud Functions, Pub/Sub topics, Cloud Storage, Firestore (for pipeline process state management) and so on, sometimes with Dataflow as well (embedded into the pipelines), but I have never used Cloud Run. So my knowledge and experience may not be relevant in your case.
The only thing I might suggest: try to prioritize your requirements (in a whole-solution-lifecycle context) and then design the solution based on those priorities. I know it is a trivial idea, sorry to disappoint you.
I have a quick point of confusion regarding the whole idea of "Infrastructure as Code" (IaC) provisioning with tools like Terraform.
I've been working on a team recently that uses Terraform to provision all of its AWS resources, and I've been learning it here and there and admit that it's a pretty nifty tool.
Beyond Infrastructure as Code being a "cool" alternative to manually provisioning resources in the AWS console, though, I don't understand why it's actually useful.
Take, for example, a typical deployment of a website with a database. After my initial provisioning of this infrastructure, why would I ever need to even run the Terraform plan again? With everything I need being provisioned on my AWS account, what are the use cases in which I'll need to "reprovision" this infrastructure?
Under this assumption, the process of provisioning everything I need is front-loaded to begin with, so why should I bother learning these tools when I can just click some buttons in the AWS console when I first deploy my website?
Honestly I thought this would be a pretty common point of confusion, but I couldn't seem to find clarity elsewhere so I thought I'd ask here. Probably a naive question, but keep in mind I'm new to this whole philosophy.
Thanks in advance!
Manual provisioning, in the long term, is slow, non-reproducible, troublesome, not self-documenting and difficult to do in teams.
With tools such as Terraform or CloudFormation you get the following benefits:
You can apply the same development principles that you use when writing traditional code. You can use comments to document your infrastructure. You can track all changes, and who made them, using a version control system (e.g. Git).
You can easily share your infrastructure architecture. Your VPC and ALB don't work? Just post your Terraform code to SO or share it with a colleague for review. That is much easier than sharing screenshots of a VPC and ALB set up manually.
It is easy to plan for disaster recovery and global applications. You just deploy the same infrastructure in different regions automatically. Doing the same manually across many regions would be difficult.
Separation of dev, staging and prod infrastructure. You just re-use the same infrastructure code across different environments, and a change to the dev infrastructure can easily be ported to prod.
You can inspect changes before actually performing them. Manual upgrades to your infrastructure can have disastrous effects due to the domino effect: changing one component can change or break many others. With infrastructure as code, you can preview the changes and get a good understanding of their implications before you actually apply them.
Teamwork. You can have many people working on the same infrastructure code, proposing changes, testing and reviewing.
I really like Marcin's answer.
Here are some additional points from my experience:
As for the version control case: you can not only see the history and authors and perform code reviews, but also treat infrastructure changes as product features. Let's say you're adding CDN support to your application, so you have to make changes in your infrastructure (to provision a cloud CDN service), in the application (to actually support and work with the CDN) and in your pipelines (to deliver static assets to the CDN, if you're using that approach). If all changes related to this new feature live in one single branch, they are transparent to everyone in the team and can easily be tracked down later.
Another thing related to version control is the ability to provision and destroy infrastructure for review apps semi-automatically, using triggers and the capabilities of your CI/CD tools, for automated and manual testing. It's even possible to run automated tests against changes in your infrastructure declaration.
If you are working on multiple similar projects, or if your project requires multiple similar but mutually isolated environments, IaC can save countless hours of provisioning and tracking everything down. It's not always a silver bullet, but in almost all cases it saves time and prevents most accidental mistakes.
Last but not least, it helps with seeing the bigger picture if you are working with hybrid or multi-cloud environments. It's not as good as infrastructure diagrams, but diagrams may not always be up to date, unlike your code.
I like fast code execution (which is why I switched from Python to Go) and I do not like dependencies. Amazon recommends using the SDK for simpler authentication (but in Lambda I can get credentials from IAM via environment variables) and for the retry-on-error logic built into the SDK (only a few lines of code to write myself, I think). Yes, it is faster to write my code using the SDK, but what are the additional caveats of using the pure HTTP API instead of the SDK? Am I too obsessed with milliseconds? Are such optimizations worth it?
Anything you do with AWS is the result of an API call, whether executed by the CLI, the web console, or an SDK.
The SDKs make it easier to interact with those APIs. While you may be able to come up with some minor improvements for some calls, overall you will spend a lot of time doing it for very little benefit.
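To give a sense of scale, here is a minimal sketch with the AWS SDK for Go v2 (listing S3 buckets is just an arbitrary example call). Credential resolution (including the environment variables Lambda injects), SigV4 request signing and retries all come for free; with the raw HTTP API you would have to implement each of these yourself:

```go
// Minimal sketch using the AWS SDK for Go v2; listing S3 buckets is an
// arbitrary example call. Credential resolution (env vars, IAM role),
// SigV4 request signing and retries are handled by the SDK; with the
// raw HTTP API you would have to implement each of these yourself.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()

	// Picks up credentials from the environment (e.g. the variables that
	// Lambda injects) or from an attached IAM role.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	client := s3.NewFromConfig(cfg)

	out, err := client.ListBuckets(ctx, &s3.ListBucketsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, b := range out.Buckets {
		fmt.Println(*b.Name)
	}
}
```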
I think the stated focus on performance belies the real trade-offs.
Consider that someone will have to maintain your code: if you use the raw API, the area to test is small, but AWS APIs might change or be deprecated; if you use an SDK, the next programmer will plug in a new SDK version and hope that it works, and if it doesn't they'll be bogged down by the sheer weight of the SDK.
Likewise, imagine someone needs to do a security review of this app, or to introduce something not yet covered by the SDK (say, propagating an accounting group from the caller's role to the underlying storage).
I don't think there is a clear answer.
Here are my suggestions:
keep it consistent -- either API or SDK (within a given app)
consider the bigger picture (how many apps do you plan to write?)
don't be afraid to switch to the other approach later
I've had to decide on something similar in the past, with Docker (much nicer APIs and SDKs/libs). Here's how it played out:
For testing, we ended up using a beta version of the Docker Python bindings: the production version was not enough, and the bindings (your SDK) were overall pretty good and clear.
For log scraping, I used HTTP calls (your API), officially "because performance", but in reality because of the lower mental load of the API compared to the SDK, and because the bindings (SDK) did not support asyncio.
I'm trying to understand the real-world usefulness of AWS CloudFormation. It seems to be a way of describing AWS infrastructure as a JSON file, but even then I'm struggling to understand what benefit that serves (besides potentially "recording" your infrastructure changes in VCS).
What purpose do CloudFormation's JSON files serve? What benefits does CloudFormation have over using the AWS web console and making changes manually?
CloudFormation gives you the following benefits:
You get to version control your infrastructure. You have a full record of all changes made, and you can easily go back if something goes wrong. This alone makes it worth using.
You have a full and complete documentation of your infrastructure. There is no need to remember who did what on the console when, and exactly how things fit together - it is all described right there in the stack templates.
In case of disaster you can recreate your entire infrastructure with a single command, again without having to remember just exactly how things were set up.
You can easily test changes to your infrastructure by deploying separate stacks, without touching production. Instead of having permanent test and staging environments you can create them automatically whenever you need to.
Developers can work on their own, custom stacks while implementing changes, completely isolated from changes made by others, and from production.
It really is very good, and it gives you both more control, and more freedom to experiment.
First, you seem to underestimate the power of tracking changes in your infrastructure provisioning and configuration in VCS.
Provisioning and editing your infrastructure configuration via the web interface is usually a very lengthy process. Having the configuration in a file, rather than spread across multiple web dashboards, gives you a much-needed overview of what you use and how it is configured. Also, when you repeatedly configure similar stacks, you can re-use the code and avoid errors and mistakes.
It's also important to note that AWS CloudFormation resource support frequently lags behind the development of the services available in the AWS Console. CloudFormation also requires some know-how and time to get used to, but in the end the benefits prevail.
I'm trying to pick version control, continuous integration, and hosting for a smallish Flex + Ruby or Django project. Questions:
Version control: I've used SVN and CVS in the past. I hear great things about Git. Not sure what to pick.
Continuous integration: I've heard good things about Hudson and CruiseControl. Not sure what to pick.
Hosting: is my own server the only way to go? Are there decent cloud options that are not too expensive? Or should I look for a free hosting service?
Thank you for your help!
Use Git.
Git is a great tool that allows a very flexible workflow. It has lots of benefits over Subversion/CVS, the biggest of which is the ability to branch and merge seamlessly. This can't be overstated. The merge hell that ensues when attempting to use SVN's branching and merging is a thing of the past. For a better case on why to use Git, check out http://whygitisbetterthanx.com/
Use Hudson.
Hudson is easily the best CI tool in the game. The reason Hudson is the best is that it's easy to configure (for one or multiple nodes), it has a ton of plugins, and it handles the 90% use case extremely well. You are in the 90% use case. People like Mozilla aren't. Check out C. Titus Brown's talk at PyCon for more info: http://pycon.blip.tv/file/3259794/ (If you decide that Hudson isn't what you should use, check out Buildbot.)
Use Webfaction (or Rackspace Cloud).
Webfaction is a great starting ground. If your needs are low, check them out. Beyond that, I'd suggest taking a hard look at Rackspace Cloud (RSC). RSC makes scaling out much easier, and their pricing model is very palatable for things that aren't bandwidth-intensive (i.e. most things that don't require tons of uploads/downloads). It starts at $10/mo. Their management console is good (save for the DNS administration interface, but even that is more than bearable). If your needs expand beyond RSC (doubtful), you would do well to check out Amazon's EC2. Companies like RightScale can help when it comes to scaling out.