EKS has an issue connecting to Oracle - amazon-web-services

I have an Azure DevOps pipeline that deploys to Amazon EKS. The pipeline works fine and deploys to the cluster without issue.
It is a Spring Boot application that connects to an Oracle database on startup, and I get the error below.
[nio-8203-exec-3] com.zaxxer.hikari.util.DriverDataSource : Registered driver with driverClassName=oracle.jdbc.driver.OracleDriver was not found, trying direct instantiation.
Caused by: oracle.net.ns.NetException: Listener refused the connection with the following error:
ORA-12505, TNS:listener does not currently know of SID given in connect descriptor
I suspect the driver warning is not the real issue, because the same code, the same pipeline, and the same cluster work absolutely fine in a different namespace.
The Oracle error occurs only when the pipeline deploys to the default namespace.
When I telnet to the Oracle server I get a response; it simply cannot connect from the default namespace, and I don't know why.
I also have two Oracle databases, one on-premises and one in RDS; both show the same issue from the default namespace and connect without issue from the other namespace.
I don't know what is different about the default namespace or what to check there.
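For readers hitting the same thing: a per-namespace difference usually comes down to a NetworkPolicy applied to the default namespace, or a namespaced Service/ConfigMap/Secret that overrides the DB host or SID. A quick way to compare is to run a probe like the following from a pod in each namespace (host and port are placeholders, not taken from the question):

import socket

host, port = "oracle.example.com", 1521  # placeholders: your DB endpoint

# Step 1: does DNS resolve to the same address in both namespaces?
print("resolved:", socket.gethostbyname(host))

# Step 2: can we open a TCP connection to the listener?
try:
    with socket.create_connection((host, port), timeout=5):
        print("TCP connect OK")
except OSError as e:
    print("TCP connect failed:", e)

If the TCP connect succeeds in both namespaces, the problem is more likely the connect descriptor itself: ORA-12505 means the listener is reachable but does not recognize the SID, which typically points at a JDBC URL using a SID where a service name is expected.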

Related

Cloud Composer worker fails to connect to external database

I am attempting to take my existing Cloud Composer environment and connect to a remote SQL database (Azure SQL). I've been banging my head against this for a few days and I'm hoping someone can point out where my problem lies.
Following the documentation found here, I've spun up a GKE Service and SQL Proxy workload. I then created a new Airflow connection, as shown here, using the full name of the service, azure-sqlproxy-service.
I test run one of my DAG tasks and get the following:
Unable to connect: Adaptive Server is unavailable or does not exist
Unsure of the issue, I decided to remote directly into one of the workers, whitelist its IP on the remote DB firewall, and try to connect to the server. With no command-line MSSQL client installed, I launched Python on the worker and attempted to connect to the database with the following:
import pymssql
connection = pymssql.connect(host='database.url.net', user='sa', password='password', database='database')
This produces the same error as above with both the service name and the remote IP entered as the host. Even ignoring the service/proxy, shouldn't this Airflow worker be able to reach the remote database? I can ping websites, but the remote DB logs don't show any failed logins. With such a generic error and few ideas on what to try next, I'm stuck. A few Google results suggest switching libraries, but I'm not quite sure how to do that within Airflow, or whether I even need to.
What troubleshooting steps could I take next to get at least a single worker communicating with the DB before moving on to the service/proxy?
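One low-level check that separates network reachability from driver problems is a raw TCP probe to the SQL Server port from that same Python shell (a sketch; the host is the placeholder from the question):

import socket

# If this succeeds, the network path is fine and the fault lies with the
# driver; if it times out, it's firewall/routing rather than pymssql.
try:
    with socket.create_connection(("database.url.net", 1433), timeout=5):
        print("port 1433 reachable")
except OSError as e:
    print("unreachable:", e)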
After much pain I've found that Cloud Composer uses Ubuntu 18.04, which currently breaks pymssql, as described here:
https://github.com/pymssql/pymssql/issues/687
I tried downgrading to pymssql 2.1.4 without success. Needing to get this done, I followed the instructions outlined in this post to use pyodbc instead:
Google Composer- How do I install Microsoft SQL Server ODBC drivers on environments
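For reference, a minimal pyodbc connection along those lines could look like this (a sketch, assuming the Microsoft ODBC Driver 17 for SQL Server was installed per that post; the server and credentials are the placeholders from the question):

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=database.url.net,1433;"  # note: the port goes after a comma
    "DATABASE=database;UID=sa;PWD=password;"
    "Encrypt=yes;"  # Azure SQL requires encrypted connections
)
cursor = conn.cursor()
cursor.execute("SELECT 1")
print(cursor.fetchone())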

Port mapping in Windows Server 2016 - Docker

I have been trying to set up Docker on Windows Server 2016 on an AWS instance to run an IIS application.
From this question, Cannot access an IIS container from browser - Docker, IIS has been set up inside a container and is accessible from the host without port mapping.
However, to allow other users on the Internet/intranet to access the website, it appears (after some Googling) that we do need port mapping.
The error I encountered with port mapping is given in the question above, so I guess using NAT is not the correct option. Therefore, my team and I tried to create another network (custom/bridge) following the instructions at
https://docs.docker.com/v17.09/engine/userguide/networking/#user-defined-networks
However, we could not create such a network. The closest answer we found was:
https://github.com/docker/for-win/issues/1960
My team guessed that AWS may block that option; if anyone can confirm this, please do.
Another thing I noticed: when we create an ECS instance in AWS, only the default (NAT) network mode is offered for Windows.
So is NAT the only network mode accepted on Windows Server?
Our objective: expose the container-hosted IIS application to the Internet/intranet on Windows Server 2016.
If anyone has any suggestions or advice, please share; many thanks.
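Assuming the container does get published with NAT (e.g. docker run -p 80:80 ...) and the AWS security group and Windows firewall allow inbound traffic on that port, a quick check from another machine that the IIS site is reachable could look like this (the hostname is a placeholder):

from urllib.request import urlopen

# Probe the instance's public address on the published port. Note that on
# Windows Server 2016 a NAT-published port is reachable from other machines
# but not via localhost on the container host itself.
with urlopen("http://ec2-xx-xx-xx-xx.compute.amazonaws.com/", timeout=10) as resp:
    print(resp.status, resp.reason)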

Cassandra c++ driver keeps attempting reconnection to endpoints that do not exist anymore

I am using DataStax's C++ driver version 2.8.0 for Apache Cassandra inside a Kubernetes application. Cassandra is deployed as a 3-node cluster via this Helm chart.
The chart leverages Kubernetes' headless services to make the Cassandra endpoints available, so there is an entry in the Kubernetes DNS for those endpoints.
I have a C++ app running in a Kubernetes pod that interacts with Cassandra, using that DNS entry to resolve the endpoints. Following the driver usage guidelines, the application holds a single Cassandra connection object. The connection is initialized at the start of the program, and a failure to initialize it, or to execute a query later on, terminates the program.
Everything works fine, but Cassandra nodes/pods may eventually go down for some reason. When that happens they are respawned, but get reassigned a different IP. The C++ driver appears to pick up the new endpoints from DNS without any additional code. However, the connection is not closed on the client side in this situation, and it looks like the previous endpoints remain in the connection pool at some level. This leads to a series of log events similar to the following:
1531920921.161 [WARN] (src/pool.cpp:420:virtual void cass::Pool::on_close(cass::Connection*)): Connection pool was unable to reconnect to host XXX.XXX.XXX.XX because of the following error: Connection timeout
and
1531920921.894 [WARN] (src/pool.cpp:420:virtual void cass::Pool::on_close(cass::Connection*)): Connection pool was unable to reconnect to host XXX.XXX.XXX.XX because of the following error: Connect error 'host is unreachable'
These pop up every [reconnect timeout]. The more IP reassignments, the more log messages, which as you can guess can grow to a very large number for long-lived applications.
Is there a feature of the driver's API for dealing with this, or more generally a good/recommended way to handle it client-side? One option, external to the driver, would be to reset the connection in the client code, but (unless I am missing something) I see no way to "catch" such events: they only show up in the logs.
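For comparison only: DataStax's Python driver exposes host up/down/add/remove notifications through a listener interface, which is exactly the kind of hook that would let client code react to respawned pods instead of watching the logs. A sketch of that concept in Python (the C++ driver's 2.8.0 API differs, so this is illustrative, not a drop-in answer):

from cassandra.cluster import Cluster
from cassandra.policies import HostStateListener

class PodChurnListener(HostStateListener):
    """React to Cassandra pods disappearing or returning with new IPs."""
    def on_up(self, host):
        print("host up:", host)
    def on_down(self, host):
        print("host down:", host)
    def on_add(self, host):
        print("host added:", host)
    def on_remove(self, host):
        # An old pod IP is gone for good: a natural place to clean up.
        print("host removed:", host)

cluster = Cluster(["cassandra"])  # the headless-service DNS name from the chart
session = cluster.connect()
cluster.register_listener(PodChurnListener())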

Pentaho DI can't connect to AWS Redshift - Amazon Error 100021

According to Pentaho's documentation, we should be using RedshiftJDBC4.jar instead of version 4.1. I downloaded the driver and placed it in the lib/ directory. After relaunching spoon.sh, it no longer complains about being unable to find the com.amazon.redshift.jdbc4 driver class, as it did when I was using the 4.1 driver earlier. However, it still cannot establish the connection.
Error connecting to database [aws_redshift] :
org.pentaho.di.core.exception.KettleDatabaseException: Error occurred
while trying to connect to the database
Error connecting to database: (using class
com.amazon.redshift.jdbc4.Driver) Amazon Error setting
default driver property values.
Can anyone help with this?
On the flip side, I can connect to my endpoint using SQLWorkbench/J, a SQL client tool.
Somehow I managed to get it working. Downloading the AWS Redshift drivers (versions 4, 4.1, and 4.2) and placing them in the lib/ directory did not work for me with any of the three when choosing Redshift as the connection type (in the Database Connection setup).
Instead, I chose PostgreSQL with JDBC. In the host name field, I entered the endpoint WITHOUT the port number 5439 at the end, so the host should just end with ...amazonaws.com. Fill in the database name, the port number 5439, and the username and password. If this does not work, try downloading the latest PostgreSQL JDBC driver, placing it in the lib/ directory, and trying again.
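The PostgreSQL route works because Redshift speaks the PostgreSQL wire protocol. A quick way to sanity-check the endpoint and credentials outside of Spoon is a plain PostgreSQL client, for example (a sketch; the endpoint and credentials are placeholders):

import psycopg2

conn = psycopg2.connect(
    host="mycluster.abc123xyz.us-east-1.redshift.amazonaws.com",  # no port here
    port=5439,
    dbname="dev",
    user="awsuser",
    password="password",
    sslmode="require",  # use if the cluster enforces SSL
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())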

Informatica problem

Facing error: cannot start the server 'servername'
When trying to start the workflow, I get the above error. I have configured the server settings properly and it resolves as well.
All the Informatica services are running fine. What could be the problem?
I suspect a wrong assignment of the Integration Service to the workflow.
Make sure you assign a running/working Integration Service to the workflow.
To assign an Integration Service, close all folders, select the Integration Service in the navigation panel, then select the workflow you are trying to run and apply the settings.
Let me know if this works for you.
Which version are you running? There could be multiple reasons for this error message, but most probably
your Informatica server configuration is not correct, so the attempt to start the server fails. As a result, when a workflow tries to execute, the attempt to connect to a running server instance fails.
I doubt that all the Informatica services are actually running fine. If they are, I would reconfigure the client connection and check that you are hitting the right domain/service combination.
I feel there can be multiple reasons for this issue:
the Informatica Integration Service is not correctly assigned;
the Informatica services are not running;
a wrong connection is configured.
I believe it is most likely an Integration Service-related issue.