Using ELK service from Swisscom cloud from outside - cloud-foundry

We'd like to use the ELK service provided by the Swisscom cloud. Because the applications we want to log are not hosted with Swisscom, but externally, we'd like to connect to the ELK service from outside. Is this possible at all? Or is the ELK service only available to Apps hosted in the Swisscom cloud?

When you create and bind an ELK service, you will receive a connection string and credentials like this:
$ cf env $APP
Getting env variables for app $APP in org $ORG / space $SPACE as $USER...
OK
System-Provided:
{
  "VCAP_SERVICES": {
    "elk": [
      {
        "credentials": {
          "elasticSearchHost": "9zz2ulprvgzlepa5.service.consul",
          "elasticSearchPassword": "$PASSWORD",
          "elasticSearchPort": 48783,
          "elasticSearchUsername": "$USERNAME",
          "kibanaPassword": "$PASSWORD",
          "kibanaUrl": "http://xjcv9zh0jer2s44q.service.consul:59664",
          "kibanaUsername": "$USERNAME",
          "logstashHost": "gew5qn71sxcz49gd.service.consul",
          "logstashPort": 46611,
          "syslog": "syslog://uew5qn71sxcz49gd.service.consul:46611"
        },
        "label": "elk",
        "name": "example-so",
        "plan": "small",
        "provider": null,
        "syslog_drain_url": "syslog://gew5qn71sxcz49gd.service.consul:46611",
        "tags": []
      }
    ],
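Inside your app, these credentials are available in the VCAP_SERVICES environment variable. A minimal sketch of extracting them, assuming jq is available in the container (the .elk[0] path matches the structure above):
$ echo $VCAP_SERVICES | jq -r '.elk[0].credentials.logstashHost'
gew5qn71sxcz49gd.service.consul
$ echo $VCAP_SERVICES | jq -r '.elk[0].credentials.elasticSearchPort'
48783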
You can't reach the *.service.consul addresses from outside (the DNS entries only resolve inside the Swisscom cloud). You can only reach those addresses from your app (running in a Cloud Foundry container).
There is a workaround, but I recommend it only for development purposes.
You can create a tunnel from your local desktop to Elasticsearch or the Kibana web interface.
See Administrating Service Instances with Service Connector. This is a CF CLI plugin developed by Swisscom.
After creating a service instance, you’ll eventually need to
administrate the service. For example you might need to create data
tables in a database or backup/restore your data. For these use cases,
we created the Cloud Foundry CLI Plugin Service Connector which is a
local proxy app through which you can connect to your service
instances using your preferred locally installed tools.
Example for the Kibana web interface:
cf service-connector 80 xjcv9zh0jer2s44q.service.consul:59664
You can also access Elasticsearch from your desktop the same way and use its API to insert or query documents.
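A sketch using the credentials from above (the local port 9200 is an arbitrary free port; basic auth is assumed, as suggested by elasticSearchUsername/elasticSearchPassword):
$ cf service-connector 9200 9zz2ulprvgzlepa5.service.consul:48783
$ curl -u $USERNAME:$PASSWORD http://localhost:9200/_cat/indices
$ curl -u $USERNAME:$PASSWORD 'http://localhost:9200/logstash-*/_search?q=message:error'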

The ELK stack has three components:
Elasticsearch - storage and indexing
Logstash - receives and processes log messages (e.g. syslog, JSON, plain text)
Kibana - web UI to search and visualize
As #Fydor wrote, you cannot reach ELK's service endpoints from the outside. This is also an issue if you want to access the logs of your CF-hosted apps, since you do not always want to use Swisscom's service connector to reach Kibana.
The usual solution is therefore to deploy a small proxy application; Swisscom provides a sample for that:
Alternatively there is the possibility to use a proxy app like the
Swisscom Kibana Proxy to make your Kibana dashboard publicly
available.
Since Elasticsearch exposes a REST interface, you can use the same proxy approach to publish the Elasticsearch endpoint. If you do, you should also take the chance to add some security measures (e.g. authentication) to the proxy app.
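Once such a proxy is published (say at https://my-es-proxy.example.com, a hypothetical route), you can insert and query documents over plain HTTP, for example:
$ curl -X POST -H 'Content-Type: application/json' -d '{"message": "hello from outside"}' https://my-es-proxy.example.com/myindex/mytype
$ curl 'https://my-es-proxy.example.com/myindex/_search?q=message:hello'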
There are already many logging frameworks which directly support forwarding to Elasticsearch.
If you need to integrate with existing logging solutions (like syslog, text logs, ...), then you might want to use Logstash.
As Cloud Foundry currently supports publishing only HTTP and HTTPS endpoints, you cannot use Swisscom's provided Logstash instance for that, but must deploy your own instance and configure it to use your published Elasticsearch endpoint.
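A minimal logstash.conf sketch for such a self-deployed instance (the published endpoint, credentials, and syslog port are placeholders):
input {
  syslog { port => 5514 }
}
output {
  elasticsearch {
    hosts => ["https://my-es-proxy.example.com:443"]
    user => "$USERNAME"
    password => "$PASSWORD"
  }
}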

Related

Google Cloud (API GATEWAY) Custom Domain

I am currently building a REST API; for this I am using Google Cloud API Gateway and Google Cloud Run. I've been looking through all the Google Cloud documentation and researching elsewhere, and I can't find how to add a custom domain to an API Gateway instance. The funny thing is that there is more documentation for Google Cloud Endpoints; I could find how to do it with Endpoints, but it does not apply to my use case.
I have 10 instances of Google Cloud Run, each one running a microservice respectively, and I want to join everything under a single domain and add support with OpenAPI, but I have failed in the attempt.
In any case, if someone has managed to customize the domain of an API Gateway instance, I would appreciate it if you could guide me. Greetings.
As of the beta release, custom domain names are not supported on GCP for API Gateway. Since it is still in beta as of today, if you want to use a custom domain, you could use Cloud Endpoints on Cloud Run, or you could even look into using microservices on App Engine.
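If you go the Cloud Endpoints on Cloud Run route, you can map a custom domain to the Cloud Run service that hosts the gateway. A sketch (the service name, domain, and region are hypothetical):
gcloud beta run domain-mappings create \
  --service my-gateway \
  --domain api.example.com \
  --region us-central1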

AWS choices for single microservice (spring boot) for Angular 7 client

I'm relatively new to AWS and wanted suggestions about the best options for my needs. I have a single Spring Boot API that is to be accessible only to my Angular 7 client. The client will go in an S3 bucket. I need suggestions for how to host the API (it needs an auto-generated MySQL DB).
So far I have seen ECS vs Elastic Beanstalk vs. Amplify. Can someone experienced suggest me an option that won't be overkill for this small project? The API could be called frequently depending on traffic to the client.
If you have suggestions from Azure or Google Cloud Platform those would be welcome too.
Thank you!
AWS provides the EC2 service, where you can create an instance (virtual machine) and manually install/deploy your application and all the required software. For personal or very small projects this can be an option, but you should consider that your backend will not be able to scale to more instances automatically (or by configuration), and you will have to take care of the configuration and backups of your database, etc.
For production-grade applications there are a lot of advantages of separating your application components, using a specific service for each component.
Given your application stack, I would recommend considering this approach:
Create a relational DB with AWS RDS
Deploy your Spring backend to AWS Beanstalk
Deploy your Angular frontend to AWS S3 (it can be served as static content)
Create a CloudFront distribution with two origins, routing the requests that must go to the backend (usually with a URL convention like /api/*) and everything else to the frontend
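A rough CLI sketch of these steps (all names are placeholders; the CloudFront distribution with its two origins is easiest to set up in the console):
# 1. MySQL database on RDS
aws rds create-db-instance --db-instance-identifier my-db \
  --db-instance-class db.t3.micro --engine mysql \
  --master-username admin --master-user-password '<secret>' \
  --allocated-storage 20
# 2. Spring Boot backend on Elastic Beanstalk (EB CLI)
eb init my-api        # choose the Java platform when prompted
eb create my-api-env
# 3. Angular frontend on S3 as static content
aws s3 mb s3://my-frontend-bucket
aws s3 sync dist/my-app s3://my-frontend-bucket
aws s3 website s3://my-frontend-bucket --index-document index.html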

AWS products and services naming nomenclature starting with 'Amazon' vs 'AWS'

Just curious to understand if there is any logical reasoning behind the naming of AWS products and services. For example, it has been named AWS Lambda and not Amazon Lambda, and it is Amazon S3 and not AWS S3.
If you hover over the Products menu on the AWS homepage, you can see the list of all products and services at a glance, prefixed with both 'Amazon' and 'AWS'.
I managed to find an answer on the naming analogy for AWS products and services in another similar question posted here. The response was provided by a Senior Technical Trainer working at Amazon Web Services.
The pattern is that utility services are prefixed with AWS, while standalone services are prefixed by "Amazon".
Services prefixed with AWS typically use other services, for example:
• AWS Elastic Beanstalk, AWS OpsWorks and AWS CloudFormation launch other services
• AWS Lambda is triggered by other services
• AWS Data Pipeline moves data between other services
The AWS documentation page is a great reference for determining the official name of a service.
As far as I understand, the prefix AWS is used for PaaS (Platform as a Service) and the prefix Amazon is used for IaaS (Infrastructure as a Service). The term AWS (Amazon Web Services) is used whenever something is offered as a service/platform, whereas Amazon is used whenever a hardware resource/infrastructure is provided.
For example: on the products page of the AWS site, in the compute category, Amazon EC2 is IaaS providing compute capacity, whereas AWS Elastic Beanstalk is PaaS, a platform for deploying web services and web apps/websites; likewise, AWS Lambda is PaaS for serverless computing, which lets us run code without provisioning or managing servers. Similarly, in the storage category, Amazon S3 is IaaS providing storage capabilities, whereas AWS Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud, which is a kind of PaaS.
Although this is just a logical assumption, as we never really know how Amazon has named its products and services. So please forgive me if there are differences of opinion regarding this.
In one of the AWS meetups it was said that Amazon itself uses a few of its cloud services, and these are named with the 'Amazon' prefix.
I am not sure how much of this is true...
Web service definition (wiki):
A web service (WS) is either:
a service offered by an electronic device to another electronic device, communicating with each other via the Internet, or
a server (not an operating system service) running on a computer device, listening for requests at a particular port over a network, serving web documents (HTML, JSON, XML, images).
Context: the web service, initially designed as a replacement for Remote Procedure Call (RPC), was a revolutionary idea during the Internet boom, based mainly on XML. Amazon's philosophy was to manage all ERP and customer requests using IT (web services) instead of traditional paper-based processes (or RPC, or non-automated tools). The same approach was then applied from books to compute resources (that's how the S3 and EC2 products came to be).
Any service designed to be used by the customer mainly through an API (or web service; today it would be called an API-first product) is part of the AWS collection of services, and any service seen as a traditional product (like a replacement for something you would install on your desktop or use from the cloud, mainly through a UI) is part of the Amazon collection of services. Today we can see exceptions to this rule. Initially this was the thinking of Jeff Bezos. To understand more about his philosophy, read The Secret of Amazon's Success: Internal APIs:
Think about what Bezos was asking! Every team within Amazon had to interact using Web Services.
Anyone who doesn’t do this will be fired. Thank you; have a nice day!

AWS offerings for monitoring EC2 Tomcat Web Application

I have a Java web application running on Tomcat, deployed on an EC2 instance. Is there any way I can monitor/set alarms for when the web application goes down or stops responding? Essentially, what I would like to do is check whether an HTTP request to the web app responds with status 200. If it does not respond with 200 (a few times in a row), it should raise an alarm and send an e-mail to some ops people.
I know there are third-party options like Nagios / UptimeRobot that I could use, but I wanted to know if there are any AWS offerings for this. Is it possible to set up such automated monitoring using AWS CloudWatch? I could not find a way to do this based on what I read about CloudWatch. If this isn't the sort of thing CloudWatch can handle, is there another AWS service suited for this?
I think the port monitoring feature is available under AWS Elastic Beanstalk.
You can consider checking this http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.healthstatus.html
Ashutosh,
EC2 is an IaaS offering from AWS, so there is no built-in AWS offering to monitor your Tomcat server itself. There are custom-built solutions, which I think you are not looking for here.
However, if you are using an Application Load Balancer or Beanstalk, you get options to trigger alarms.
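For example, with an Application Load Balancer you can alarm on the target group's health checks. A sketch (the SNS topic and target group/load balancer identifiers are placeholders):
aws cloudwatch put-metric-alarm \
  --alarm-name api-unhealthy-hosts \
  --namespace AWS/ApplicationELB \
  --metric-name UnHealthyHostCount \
  --dimensions Name=TargetGroup,Value=targetgroup/my-tg/0123456789abcdef \
               Name=LoadBalancer,Value=app/my-alb/0123456789abcdef \
  --statistic Maximum --period 60 --threshold 0 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 3 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts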
Yes, you can achieve it through CloudWatch. Collect your logs with the CloudWatch agent and upload them to a CloudWatch log stream. Below is the reference URL for configuring the CloudWatch agent:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html
After that, with "create metric filter" you can set up an email trigger as per your requirements:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringPolicyExamples.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Counting404Responses.html
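A sketch of the metric filter and alarm with the AWS CLI (the log group name, access-log field layout, and SNS topic are placeholders):
# count 5xx responses in the Tomcat access log
aws logs put-metric-filter \
  --log-group-name my-app-logs \
  --filter-name server-errors \
  --filter-pattern '[ip, user, username, timestamp, request, status_code=5*, bytes]' \
  --metric-transformations metricName=ServerErrorCount,metricNamespace=MyApp,metricValue=1
# alarm (and e-mail via SNS) when errors keep occurring
aws cloudwatch put-metric-alarm \
  --alarm-name tomcat-server-errors \
  --namespace MyApp --metric-name ServerErrorCount \
  --statistic Sum --period 300 --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 2 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts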

Windows Azure Cloud Service - entry point

I have created a "Windows Azure Cloud Service" project in VS2012.2 with an MVC4 web role. When I run the project it just gives me a web page. I am trying to develop a web service back-end for my website, so I want to be able to call web methods directly from my website, which is also running on Azure.
When I F5 my project it just gives me a website. Should I be using a worker role instead of a web role?
If you put your back-end web service along with your web role, then you can use it directly.
If you put your service in a worker role, then you need to open an input endpoint on your worker role so that it can be reached from outside of Azure.
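A sketch of such an input endpoint in ServiceDefinition.csdef (the role name and port are placeholders):
<WorkerRole name="MyWorkerRole" vmsize="Small">
  <Endpoints>
    <InputEndpoint name="HttpIn" protocol="http" port="8080" />
  </Endpoints>
</WorkerRole>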
Or, you can create another website for your service and map it to a virtual directory/application on your web role.
You may use both, but a web role is easiest as it sets up everything for you.
It's better to use a web role in your scenario, the reason being that publishing a web role is pretty straightforward.