Amazon Web Services RDS DB Engine Version - amazon-web-services

I want to use Amazon Web Services RDS, so I'm working through the setup process.
When I choose the DB engine version, there are two variants (a and b) of 5.6.19.
What is the difference between the two?
Thank you.

The a and b letter designations appear to indicate that the instance will be running the stated official release version plus bug fixes created, backported, or otherwise applied by the RDS developers, so 5.6.19b would be the newer of the two. The issues they address are explained here:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html#MySQL.Concepts.KnownIssuesAndLimitations
However, unless you have a specific reason not to, you should probably use the latest version available.
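If you want to compare the available variants side by side before creating the instance, you can ask the API directly. Below is a minimal boto3 sketch (assuming boto3 is installed and AWS credentials and a default region are already configured) that lists the MySQL engine versions RDS currently offers:

    # List the MySQL engine versions RDS offers, so suffixed releases such as
    # 5.6.19a / 5.6.19b can be compared side by side.
    # Assumes boto3 is installed and AWS credentials/region are configured.
    import boto3

    rds = boto3.client("rds")
    paginator = rds.get_paginator("describe_db_engine_versions")
    for page in paginator.paginate(Engine="mysql"):
        for version in page["DBEngineVersions"]:
            print(version["EngineVersion"], "-",
                  version.get("DBEngineVersionDescription", ""))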

Related

We're running ElasticSearch 7.8 through AWS OpenSearch with logging turned off. Were we safe from log4shell or not?

ElasticSearch itself should be safe, because of the Java Security Manager settings. We're not using logging anyway, so even if those settings are disturbed, we might not be sending anything to the logger.
But Amazon has still issued a Log4j patch for our instance -- after several days now. The patch (R20211203-P2) could just be an upgrade to Log4j 2.15. Or maybe there's some other logger in the control plane we can't see that it is securing?
We have tried requests containing common exploit strings and we do not see any requests coming to our target.
Were we safe before patch R20211203-P2 arrived? Does anyone know what R20211203-P2 actually does? There are no release notes.
Amazon OpenSearch Service has released a critical service software update, R20211203-P2, that contains an updated version of Log4j2 in all regions. We strongly recommend that customers update their OpenSearch clusters to this release as soon as possible.
So yeah I would upgrade ASAP just in case.
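For what it's worth, here is a minimal boto3 sketch for checking whether a service software update such as R20211203-P2 is pending on a domain and then starting it. The domain name is a placeholder, and it assumes a recent boto3 with the "opensearch" client plus configured credentials:

    # Check whether a service software update is pending and trigger it.
    # "my-domain" is a placeholder; requires a recent boto3 with the
    # "opensearch" client and configured AWS credentials/region.
    import boto3

    client = boto3.client("opensearch")

    domain = client.describe_domain(DomainName="my-domain")
    software = domain["DomainStatus"]["ServiceSoftwareOptions"]
    print("current version:", software["CurrentVersion"])
    print("update available:", software["UpdateAvailable"])

    if software["UpdateAvailable"]:
        # Starts the blue/green service software update for the domain.
        client.start_service_software_update(DomainName="my-domain")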

Why does http://169.254.169.254/ in an AWS instance have so many versions apart from latest?

Inside an AWS instance, I open a browser and hit http://169.254.169.254. I get what looks like a listing of files and folders, mostly dates, with latest among them. Is there any specific meaning to those entries?
From Instance Metadata and User Data - Amazon Elastic Compute Cloud:
The earlier versions are available to you in case you have scripts that rely on the structure and information present in a previous version.
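As a quick way to see those version directories from inside the instance, here is a minimal sketch using only the Python standard library; note that if the instance enforces IMDSv2 you would also need to fetch and send a session token, which is not shown here:

    # List the metadata API versions exposed at the link-local address.
    # Works only from inside an EC2 instance; IMDSv2 token handling omitted.
    from urllib.request import urlopen

    with urlopen("http://169.254.169.254/", timeout=2) as resp:
        versions = resp.read().decode().splitlines()

    print(versions)  # e.g. ['1.0', '2007-01-19', ..., 'latest']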

Is credhub available with run.pivotal.io?

If yes, what is the URL/API to access it from a run.pivotal.io trial account?
Also, can the features of CredHub be tested using PCFDev?
Thanks
Version 0.28.0 is the latest PCFDev available on PivNet.
It corresponds to PCF 1.11.x, which is two versions behind, and that version does not support CredHub.
If I recall correctly, in 1.12 all internal properties are stored in CredHub. That said, PCFDev is not meant for production by any means, so it may not support CredHub.
To test locally, you can spin up a CredHub instance and use it from your PCF app. To do that, you will need VirtualBox to spin up a bosh-lite instance with CredHub.
Take a look at this article on how to set it up.
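Once a local CredHub is running (e.g. on bosh-lite), you can also poke its REST API directly to confirm it works. Here is a rough sketch; the URL, token, and credential name are placeholders, and a real call needs an OAuth token issued by the associated UAA:

    # Rough sketch of talking to a local CredHub over its REST API.
    # URL, token, and credential name are placeholders, not real values.
    import requests

    CREDHUB = "https://localhost:9000"      # placeholder bosh-lite CredHub URL
    TOKEN = "<uaa-bearer-token>"            # placeholder; obtain from UAA

    # Unauthenticated info endpoint -- confirms the server is reachable.
    print(requests.get(f"{CREDHUB}/info", verify=False).json())

    # Fetch a credential by name (requires a valid bearer token).
    resp = requests.get(
        f"{CREDHUB}/api/v1/data",
        params={"name": "/my-app/db-password"},   # placeholder credential name
        headers={"Authorization": f"Bearer {TOKEN}"},
        verify=False,                             # self-signed certs on bosh-lite
    )
    print(resp.json())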
No, credhub is not supported for application developers on PWS at this time.

What is the difference between Bitnami and Click to deploy on GCE?

Just trying to understand: what's the difference between Bitnami apps and Google 'click-to-deploy' options on Google Compute Engine?
For example, there is a 'Cassandra' click-to-deploy and there is a Bitnami version of 'Cassandra'.
Can anyone tell me how they compare and what the differences are?
- Is one more restrictive than the other?
- Does the Bitnami version lock you in somehow?
- Is there any performance difference (other than the obvious difference that different hardware would bring)?
Thanks.
Bitnami makes application stacks that run on several cloud platforms including Google Cloud Platform, AWS, Azure and a few others. The Bitnami images you see on Google Cloud Launcher are created by employees of Bitnami and are mostly standard across clouds.
Click to Deploy images are usually created by Google Cloud Platform employees working in conjunction with application vendors.
There are differences in versions here and there related to maintenance, but there isn't any difference in the way they are intended to be used. Some Click to Deploy images will incur higher usage charges due to licensing (i.e. the Click to Deploy image contains the "Pro" version of a vendor's software), but these are called out during the selection process.
Neither version is intended to lock you into a particular platform, Google or Bitnami; it's just that there is duplication among the applications provided.

Amazon EC2 usable as a VMware testing platform?

We need to perform tests on localized platforms that put a significant burden on our hardware resources, because for just a few weeks we might need plenty of servers and clients (Windows 2003, Windows 2008, Vista, XP, Red Hat, etc.) in multiple languages.
We have typically relied on blades with Windows 2003 and VMware, but sometimes these are outgrown by one-off needs, and the acquisition and deployment process is quite slow when the environment needs to grow.
Is Amazon EC2/S3 usable in the following scenario?
Install VMware (Desktop, because we need the ability to take snapshots) on an Amazon AMI.
Load existing VMware images from S3 and run them on EC2 instances (perhaps 3 or 4 server or client OSes on each EC2 instance).
We are mainly interested in the ability to very easily start or stop VMware snapshots for relatively short tests. This is just for testing configurations, not a production environment that actually serves a user workload; the only real user is the tester. These configurations might be required for just a few weeks and then turned off for a few months until the next release requires them again.
Is EC2/S3 a viable alternative for this type of testing purpose?
Do you actually need VMware itself, or are you testing software that runs inside the VMware VMs? You might actually need VMware if you are testing, e.g., VMware deployment policy, or running code that exercises the VMware APIs. An example of the latter might be that you are testing an application server stack and currently using VMware to test on many platforms.
If you actually need VMWare, I do not believe that you can install VMWare in EC2. Someone will correct & enlighten me if this is not the case.
If you don't actually need VMWare, you have more options. If you can use one of the zillion public AMIs as a baseline, clone the appropriate AMIs and customize them to suit your needs (save the customized version as a private AMI for your team). Then, you can use as many of them as you like. Perhaps you already have a bunch of VMWare images that you need to use in your testing. In that case, you can migrate your VMWare image to an EC2 AMI as described in various places in Google, for example:
http://thewebfellas.com/blog/2008/9/1/creating-an-new-ec2-ami-from-within-vmware-or-from-vmdk-files
(Apologies to the SO censors for not pasting the entire article here. It's pretty long.) But that's a shortcut; you can always use the documented AMI creation process to convert any machine (VMWare or not) to an AMI. Perform that process for each VMWare VM you have, and you'll be all set. Just keep in mind that when you create an AMI, you have to upload it to S3, and that will take a lot of time for large VMs.
This is a bit of a shameless plug, but we have a new startup that may deal with exactly your problem. Amazon EC2 is excellent for on-demand computing, but is really targeted at just a single user launching production servers. We've extended EC2 to make it a Virtual Lab Management environment, with self-service, policies and VM sharing. You can check it out at http://LabSlice.com and see if it meets your needs.
Amazon provides a solution themselves now: http://aws.typepad.com/aws/2010/12/amazon-vm-import-bring-your-vmware-images-to-the-cloud.html
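For reference, the VM Import feature announced in that post is now also exposed through the EC2 API. Here is a rough boto3 sketch that turns a VMDK already uploaded to S3 into an AMI; the bucket, key, and description are placeholders, and the account-level "vmimport" service role has to exist beforehand:

    # Rough sketch of VM Import via the EC2 API: convert a VMDK in S3 into an AMI.
    # Bucket/key/description are placeholders; the "vmimport" service role must
    # already be configured in the account.
    import boto3

    ec2 = boto3.client("ec2")

    task = ec2.import_image(
        Description="Windows 2008 test image",        # placeholder
        DiskContainers=[{
            "Description": "boot disk",
            "Format": "vmdk",
            "UserBucket": {
                "S3Bucket": "my-vm-images",           # placeholder bucket
                "S3Key": "win2008/disk1.vmdk",        # placeholder key
            },
        }],
    )
    print("import task:", task["ImportTaskId"])

    # Poll until the task completes; the resulting AMI id shows up here.
    print(ec2.describe_import_image_tasks(ImportTaskIds=[task["ImportTaskId"]]))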