Pivotal Cloud Foundry RedisConnectionFactory - cloud-foundry

Currently I'm using the Redis service provided by PCF. I'm connecting to it using JedisConnectionFactory from spring-data-redis, providing the needed configuration like this:
@Configuration
public class RedisConfig {

    @Bean
    public JedisConnectionFactory jedisConnectionFactory() {
        final JedisConnectionFactory jedisConFactory = new JedisConnectionFactory();
        jedisConFactory.setHostName("pivotal-redis-host");
        jedisConFactory.setPort(1234);
        jedisConFactory.setPassword("mySecretPassword");
        return jedisConFactory;
    }
}
Spring Cloud Connectors provides an AbstractCloudConfig class that can be used to configure various connections. Are there any noticeable benefits to using it instead of JedisConnectionFactory? It looks like less configuration needs to be provided, but is there any other reason?
public class RedisCloudConfig extends AbstractCloudConfig {

    @Bean
    public RedisConnectionFactory redisConnection() {
        return connectionFactory().redisConnectionFactory();
    }
}
Thanks in advance.

The main difference with Spring Cloud Connectors is that it reads the service information from the Redis service that you bound to your application on Cloud Foundry. It then automatically configures the Redis connection based on that dynamically bound information.
Your example of using JedisConnectionFactory, as well as @avhi's solution, places the configuration information directly into either your source code or application configuration files. In this case, if your service changes, you'd need to reconfigure your app and run cf push again.
With Spring Cloud Connectors, you can change services by simply unbinding and binding a new Redis service through CF, and running cf restart.
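A common way to get the best of both (a sketch, not part of the original answer; class names and the local values are illustrative) is to activate the Connectors-based factory only under the "cloud" profile, which Spring Boot enables automatically on Cloud Foundry, and fall back to fixed values everywhere else:

```java
@Configuration
public class RedisConfig {

    // On Cloud Foundry: configuration comes from the bound Redis service
    @Configuration
    @Profile("cloud")
    static class CloudRedisConfig extends AbstractCloudConfig {
        @Bean
        public RedisConnectionFactory redisConnectionFactory() {
            return connectionFactory().redisConnectionFactory();
        }
    }

    // Everywhere else (local development, tests): fixed values
    @Configuration
    @Profile("!cloud")
    static class LocalRedisConfig {
        @Bean
        public RedisConnectionFactory redisConnectionFactory() {
            JedisConnectionFactory factory = new JedisConnectionFactory();
            factory.setHostName("localhost");
            factory.setPort(6379);
            return factory;
        }
    }
}
```

With this arrangement the same artifact runs unchanged in both environments; no rebuild is needed when the bound service changes.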

In my opinion you don't even need to define a @Bean configuration explicitly.
You can simply use Spring Boot auto-configuration by providing the Redis server details in application.yml or application.properties.
spring:
  redis:
    host: pivotal-redis-host
    port: 1234
    password: mySecretPassword
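The same settings in application.properties form, if you are not using YAML:

```properties
spring.redis.host=pivotal-redis-host
spring.redis.port=1234
spring.redis.password=mySecretPassword
```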


Example of embedded Jetty and using Micrometer for stats (without Spring)

I am new to using Micrometer as a metrics/stats producer and I am having a hard time getting it configured correctly with my Jersey/embedded Jetty server. I would like to get Jetty statistics added.
I already have the servlet producing stats for the JVM in a Prometheus format.
Does anyone know of a good working example of how to configure it?
I am not using Spring Boot.
The best way is to look at the Spring Boot code. For example, it binds the Jetty connections:
JettyConnectionMetrics.addToAllConnectors(server, this.meterRegistry, this.tags);
And it uses an ApplicationStartedEvent to find the server reference.
private Server findServer(ApplicationContext applicationContext) {
    if (applicationContext instanceof WebServerApplicationContext) {
        WebServer webServer = ((WebServerApplicationContext) applicationContext).getWebServer();
        if (webServer instanceof JettyWebServer) {
            return ((JettyWebServer) webServer).getServer();
        }
    }
    return null;
}
There are other classes that record the thread usage and SSL handshake metrics.
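Outside Spring Boot the same wiring can be done by hand. The following is a minimal sketch (assuming micrometer-core, micrometer-registry-prometheus, and jetty-server on the classpath; the class name and port are made up for illustration), not a definitive implementation:

```java
import io.micrometer.core.instrument.Tags;
import io.micrometer.core.instrument.binder.jetty.JettyConnectionMetrics;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class MetricsServer {
    public static void main(String[] args) throws Exception {
        // Registry that renders metrics in the Prometheus text format
        PrometheusMeterRegistry registry =
                new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);

        Server server = new Server();
        ServerConnector connector = new ServerConnector(server);
        connector.setPort(8080);
        server.addConnector(connector);

        // The same binding Spring Boot performs: connection-level Jetty metrics
        JettyConnectionMetrics.addToAllConnectors(server, registry, Tags.empty());

        // Register your Jersey/servlet handler here, and serve the string
        // returned by registry.scrape() from your existing /metrics endpoint.
        server.start();
        server.join();
    }
}
```

Since there is no ApplicationStartedEvent outside Spring, you simply call addToAllConnectors yourself after creating the Server and before starting it.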

AWS Airflow v2.0.2 doesn't show Google Cloud connection type

I want to load data from Google Storage to S3.
To do this I want to use GoogleCloudStorageToS3Operator, which requires gcp_conn_id.
So, I need to set up the Google Cloud connection type.
To do this, I added
apache-airflow[google]==2.0.2
to requirements.txt,
but the Google Cloud connection type is still not in the dropdown list of connections in MWAA.
The same approach works well with the MWAA local runner:
https://github.com/aws/aws-mwaa-local-runner
I guess it does not work in MWAA because of the security reasons discussed here:
https://lists.apache.org/thread.html/r67dca5845c48cec4c0b3c34c3584f7c759a0b010172b94d75b3188a3%40%3Cdev.airflow.apache.org%3E
But still, is there any workaround to add Google Cloud connection type in MWAA?
Connections can be created and managed using either the UI or environment variables.
To my understanding, the limitation MWAA has on installing some provider packages applies only to the web server machine, which is why the connections are not listed in the UI. This doesn't mean you can't create the connection at all; it just means you can't do it from the UI.
You can define it from CLI:
airflow connections add [-h] [--conn-description CONN_DESCRIPTION]
                        [--conn-extra CONN_EXTRA] [--conn-host CONN_HOST]
                        [--conn-login CONN_LOGIN]
                        [--conn-password CONN_PASSWORD]
                        [--conn-port CONN_PORT] [--conn-schema CONN_SCHEMA]
                        [--conn-type CONN_TYPE] [--conn-uri CONN_URI]
                        conn_id
You can also generate a connection URI to make it easier to set.
Connections can also be set as environment variables. Example:
export AIRFLOW_CONN_GOOGLE_CLOUD_DEFAULT='google-cloud-platform://?extra__google_cloud_platform__key_path=%2Fkeys%2Fkey.json&extra__google_cloud_platform__scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform&extra__google_cloud_platform__project=airflow&extra__google_cloud_platform__num_retries=5'
If needed you can check the google provider package docs to review the configuration options of the connection.
For MWAA there are two options to set a connection:
1. Setting an environment variable, using the pattern AIRFLOW_CONN_YOUR_CONNECTION_NAME (e.g. YOUR_CONNECTION_NAME = GOOGLE_CLOUD_DEFAULT). That can be done using a custom plugin:
https://docs.aws.amazon.com/mwaa/latest/userguide/samples-env-variables.html
2. Using Secrets Manager:
https://docs.aws.amazon.com/mwaa/latest/userguide/connections-secrets-manager.html
Tested for the Google Cloud connection; both are working.
I asked AWS support about this issue. It looks like they are working on it.
They told me a way to configure the Google Cloud Platform connection by passing a JSON object in the extras with Conn Type set to HTTP, and it works.
I have validated this by editing google_cloud_default (Airflow > Admin > Connections):
Conn Type: HTTP
Extra:
{
  "extra__google_cloud_platform__project": "<YOUR_VALUE>",
  "extra__google_cloud_platform__key_path": "",
  "extra__google_cloud_platform__keyfile_dict": "{\"type\": \"service_account\",\"project_id\": \"<YOUR_VALUE>\",\"private_key_id\": \"<YOUR_VALUE>\",\"private_key\": \"-----BEGIN PRIVATE KEY-----\\n<YOUR_VALUE>\\n-----END PRIVATE KEY-----\\n\",\"client_email\": \"<YOUR_VALUE>\",\"client_id\": \"<YOUR_VALUE>\",\"auth_uri\": \"https://<YOUR_VALUE>\",\"token_uri\": \"https://<YOUR_VALUE>\",\"auth_provider_x509_cert_url\": \"https://<YOUR_VALUE>\",\"client_x509_cert_url\": \"https://<YOUR_VALUE>\"}",
  "extra__google_cloud_platform__scope": "",
  "extra__google_cloud_platform__num_retries": "5"
}
airflow conn screenshot
!! You must escape the " and \n in extra__google_cloud_platform__keyfile_dict !!
In requirements.txt I used:
apache-airflow[gcp]==2.0.2
(I believe apache-airflow[google]==2.0.2 should work as well)

Select the service you wish to carry out a Google Task Handler

I am relatively new to Google Cloud Platform, and I am able to create app services and manage databases. I am attempting to create a handler within Google Cloud Tasks (similar to the NodeJS sample found in this documentation).
However, the documentation fails to clearly address how to connect the deployed service with whatever is making the request. Necessity requires that I have more than one service in my project (one in Node for managing REST, and another in Python for managing geospatial data as asynchronous tasks).
My question: When running multiple services, how does Google Cloud Tasks know which service to direct the task towards?
Screenshot below as proof that I am able to request tasks to a queue.
When using App Engine routing for your tasks, Cloud Tasks routes them to the "default" service. However, you can override this by defining an AppEngineRouting (selecting your service, instance, and version) in the AppEngineHttpRequest field.
The sample shows a task routed to the default service's /log_payload endpoint.
const task = {
  appEngineHttpRequest: {
    httpMethod: 'POST',
    relativeUri: '/log_payload',
  },
};
You can update this to:
const task = {
  appEngineHttpRequest: {
    httpMethod: 'POST',
    relativeUri: '/log_payload',
    appEngineRouting: {
      service: 'non-default-service'
    }
  },
};
Learn more about configuring routes.
I wonder which "services" you are talking about, because it is always the current service. These HTTP requests are basically dispatched via the HTTP headers HTTP_X_APPENGINE_QUEUENAME and HTTP_X_APPENGINE_TASKNAME, as you have them in the screenshot with sample-tasks and some random numbers. If you want to send tasks to other services, those services will have to have their own task queue(s).

How to set Spring Boot RabbitMQ Heartbeat on Cloud Foundry?

I have an application running on Cloud Foundry with Spring Boot (1.5.12) and spring-boot-starter-amqp.
Based on a previous SO answer about setting the heartbeat property on the RabbitMQ auto-configured ConnectionFactory bean, I tried setting the heartbeat property as follows:
cf set-env app spring.rabbitmq.requested-heartbeat 30
cf restage app
However, when viewed through the RabbitMQ management console, the connection still indicates the heartbeat is at the default of 60s.
I took a heap dump using the actuator endpoints and looked at the connectionFactory that seemed to have been auto-reconfigured by spring-cloud-spring-service-connector. It has the default 60 seconds and ignores the requested 30 seconds.
Is there another environment property that should be used to configure the heartbeat value? If not, I suspect we will wire the CachingConnectionFactory ourselves and modify it there.
If the connection is created by Spring Cloud Connectors (i.e. spring-cloud-spring-service-connector), then you will need to customize the connection with Java configuration.
@Configuration
class CloudConfig extends AbstractCloudConfig {

    @Bean
    public RabbitConnectionFactory rabbitFactory() {
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put("requestedHeartbeat", 30);
        RabbitConnectionFactoryConfig rabbitConfig =
                new RabbitConnectionFactoryConfig(properties);
        return connectionFactory().rabbitConnectionFactory(rabbitConfig);
    }
}
More detail is available in the Connectors docs.
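If you instead wire the factory yourself, as the question suggests, the following is a minimal sketch without Connectors (assuming spring-rabbit is on the classpath; the class and host names are illustrative):

```java
@Configuration
public class RabbitConfig {

    @Bean
    public CachingConnectionFactory connectionFactory() {
        CachingConnectionFactory factory =
                new CachingConnectionFactory("my-rabbit-host");
        // Requested heartbeat (in seconds) is set on the underlying
        // RabbitMQ client ConnectionFactory, not the caching wrapper
        factory.getRabbitConnectionFactory().setRequestedHeartbeat(30);
        return factory;
    }
}
```

Note that on Cloud Foundry, Connectors auto-reconfiguration may replace a bean like this with its own, so this approach is best combined with disabling auto-reconfiguration or with explicit Java configuration as shown in the answer.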

Neo4jServer in Neo4jConfiguration - 4.1.0?

I've been using the latest code in 4.1.0-BUILD-SNAPSHOT as I need some of the new bug fixes in the 4.1 branch and just noticed that "neo4jServer()" is no longer a method exposed by Neo4jConfiguration. What is the new way to initialize a server connection and an in-memory version for unit tests? Before I was using "RemoteServer" and "InProcessServer", respectively.
Please note, the official documentation will be updated shortly.
In the meantime:
What's changed
SDN 4.1 uses the new Neo4j OGM 2.0 libraries. OGM 2.0 introduces API changes, largely due to the addition of support for Embedded as well as Remote Neo4j. Consequently, connection to a production database is now accomplished using an appropriate Driver, rather than using the RemoteServer or the InProcessServer which are deprecated.
For testing, we recommend using the EmbeddedDriver. It is still possible to create an in-memory test server, but that is not covered in this answer.
Available Drivers
The following Driver implementations are currently provided
http : org.neo4j.ogm.drivers.http.driver.HttpDriver
embedded : org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver
A driver implementation for the Bolt protocol (Neo4j 3.0) will be available soon.
Configuring a driver
There are two ways to configure a driver - using a properties file or via Java configuration. Variations on these themes exist (particularly for passing credentials), but for now the following should get you going:
Configuring the Http Driver
The Http Driver connects to and communicates with a Neo4j server over Http. An Http Driver must be used if your application is running in client-server mode. Please note the Http Driver will attempt to connect to a server running in a separate process. It can't be used for spinning up an in-process server.
Properties file configuration:
The advantage of using a properties file is that it requires no changes to your Spring configuration.
Create a file called ogm.properties somewhere on your classpath. It should contain the following entries:
driver=org.neo4j.ogm.drivers.http.driver.HttpDriver
URI=http://user:password@localhost:7474
Java configuration:
The simplest way to configure the Driver is to create a Configuration bean and pass it as the first argument to the SessionFactory constructor in your Spring configuration:
import org.neo4j.ogm.config.Configuration;
...

@Bean
public Configuration getConfiguration() {
    Configuration config = new Configuration();
    config
        .driverConfiguration()
        .setDriverClassName("org.neo4j.ogm.drivers.http.driver.HttpDriver")
        .setURI("http://user:password@localhost:7474");
    return config;
}

@Bean
public SessionFactory getSessionFactory() {
    return new SessionFactory(getConfiguration(), <packages> );
}
Configuring the Embedded Driver
The Embedded Driver connects directly to the Neo4j database engine. There is no server involved, therefore no network overhead between your application code and the database. You should use the Embedded driver if you don't want to use a client-server model, or if your application is running as a Neo4j Unmanaged Extension.
You can specify a permanent data store location to provide durability of your data after your application shuts down, or you can use an impermanent data store, which will only exist while your application is running (ideal for testing).
Properties file configuration (permanent data store)
Create a file called ogm.properties somewhere on your classpath. It should contain the following entries:
driver=org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver
URI=file:///var/tmp/graph.db
Properties file configuration (impermanent data store)
driver=org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver
To use an impermanent data store, simply omit the URI property.
Java Configuration
The same technique is used for configuring the Embedded driver as for the Http Driver. Set up a Configuration bean and pass it as the first argument to the SessionFactory constructor:
import org.neo4j.ogm.config.Configuration;
...

@Bean
public Configuration getConfiguration() {
    Configuration config = new Configuration();
    config
        .driverConfiguration()
        .setDriverClassName("org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver")
        .setURI("file:///var/tmp/graph.db");
    return config;
}

@Bean
public SessionFactory getSessionFactory() {
    return new SessionFactory(getConfiguration(), <packages> );
}
If you want to use an impermanent data store (e.g. for testing) do not set the URI attribute on the Configuration:
@Bean
public Configuration getConfiguration() {
    Configuration config = new Configuration();
    config
        .driverConfiguration()
        .setDriverClassName("org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver");
    return config;
}