I have installed Informatica version 10.2; everything is running and working fine.
I have created two Informatica PowerCenter repositories, named PRS_3 and PRS_F2, from the client side.
These repositories are present in the domain as well as in the Administrator console.
I want to know where these two repositories, PRS_3 and PRS_F2, are stored on the physical machine.
I checked the Informatica installation directory on the server side, but I did not find these files. Are the repositories stored on the client side?
[ ~]$ cd informatica_installation/
[informatica_installation]$ ls
install.sh properties Server SilentInput_upgrade_NewConfig.properties silentinstall.sh upgrade_utils
logs sapsolutions SilentInput_DT.properties SilentInput_upgrade.properties source
Messages saptrans SilentInput.properties silentinstallDT.sh unjar_esd.sh
[informatica_installation]$ cd source/
[source]$ ls
connectors DataTransformation DiskSpaceInfo.properties externaljdbcjars isp java ODBC7.1 plugins server services thirdpartynotice tomcat tools
[ModelRepositoryService]$ ls
activation-1.1.jar lucene-queryparser-4.3.0.jar
avalon-framework-4.1.3.jar lucene-sandbox-4.3.0.jar
com.infa-com.infa.products.platform.modelutil.common-metamodel-10.2.0.9.490-SNAPSHOT.jar lucene-snowball-2.4.1.jar
com.infa-com.infa.products.repository.prs.deployer.isp.service-10.2.0.82.490-SNAPSHOT.jar lucene-spatial-4.3.0.jar
[ ModelRepositoryService]$ cd ..
[ services]$ ls
AdministratorConsole DataArchiveService HumanTaskService ISPPlugins PowerExchange SearchService WebAppApplicationService
AnalystService DataIntegrationService IDDService MetadataManagerService resourcemanager shared WebServiceHub
CatalogService DQContent InfaHadoopService ModelRepositoryService SAPBWService TDMService
ContentManagementService EmailService IntelligentDQService OAuthWebService SchedulerService Tutorial
OS: Linux on the server side, Windows on the client side.
Please let me know the physical location of the repositories created by the Informatica PowerCenter Repository Service.
Informatica PowerCenter (10.x) uses a client-server architecture in which metadata is stored in a database and the executables live on a physical server (called a node).
If you want to know where the PRS_3 Repository Service keeps its data, go to the Admin console > Properties; you will see the database connection info there.
In your case:
Informatica services, such as the Integration Service and the Repository Service, run on your Linux box, wherever you installed Informatica; that is where the executables live.
Repository and domain metadata lives in a relational database (Oracle, SQL Server, etc.).
PowerCenter clients can be installed on Windows machines. They talk to the services, and the services talk to the metadata database to complete a request.
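If you want to see this for yourself, connect to the database shown in the service properties with any SQL client and look at the repository tables. A minimal sketch, assuming an Oracle repository database; the credentials, host, and service name are placeholders, and the table name is the usual PowerCenter metadata table but may differ between versions:

# connect to the repository DB that the Admin console shows for PRS_3
sqlplus prs3_user/prs3_password@//dbhost:1521/ORCLPDB <<'SQL'
-- OPB_* tables hold PowerCenter repository metadata; any OPB_/REP_ table
-- you find in this schema proves the repository lives in the database
SELECT COUNT(*) FROM OPB_SUBJECT;   -- folders defined in the repository
SQL

Nothing repository-specific shows up under the server installation directory you listed; only service binaries, configuration, and logs live there.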
I've used the 5.11.0 .deb package and installed it using
sudo dpkg -i packagename
Now I can run WSO2 IS as a service by running sudo service wso2is-5.11.0 start.
But I don't know how to run one more instance, preferably on port 9444.
It's quite easy:
Download the WSO2 Identity Server from [here][1].
[1]: https://wso2.com/identity-server/#
Extract the file to a dedicated directory. This directory is referred to as <IS_HOME_PRIMARY> in this answer.
Make a copy of this directory in the same location and rename it. The copy is referred to as <IS_HOME_SECONDARY> in this answer.
By default, the HTTPS port of the primary IS instance is 9443. Leave this as it is.
There are two ways to set a port offset so that the secondary instance listens on 9444:
Option 1: Pass the port offset to the server during startup from <IS_HOME_SECONDARY>/bin. The following command starts the server with the default ports incremented by 1:
./wso2server.sh -DportOffset=1
Option 2: Set the offset value in the <IS_HOME_SECONDARY>/repository/conf/deployment.toml file as follows:
[server]
offset = 1
Setting the offset to 1 changes the HTTPS port of the secondary IS instance to 9444.
With option 2 in place, install and run the two Identity Server instances.
Go to <IS_HOME_PRIMARY>/bin and <IS_HOME_SECONDARY>/bin in your command line and type the following command for each instance.
On Linux/Solaris: sh wso2server.sh
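Whichever option you use, a quick sanity check (a sketch, assuming the directory names used above) is to start both instances and confirm that each HTTPS port answers:

# terminal 1: primary instance on the default HTTPS port 9443
cd <IS_HOME_PRIMARY>/bin && sh wso2server.sh

# terminal 2: secondary instance on 9444
# (add -DportOffset=1 here only if you did NOT set the offset in deployment.toml)
cd <IS_HOME_SECONDARY>/bin && sh wso2server.sh

# terminal 3: both management consoles should respond (typically 200 or 302)
# (-k because the default certificates are self-signed)
curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:9443/carbon/
curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:9444/carbon/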
Also, as @ycr mentioned, you can configure the WSO2 Identity Server as a Linux service: https://is.docs.wso2.com/en/5.11.0/setup/installing-as-a-linux-service/#running-the-product-as-a-linux-service
You can't start the same installation multiple times with different port offsets. Hence, download the product again, unzip it, and start it with a port offset as described here.
Google Cloud Run allows for using Cloud SQL. But what if you need Cloud SQL when building your container in Google Cloud Build? Is that possible?
Background
I have a Next.js project that runs in a container on Google Cloud Run. Pushing my code to Cloud Build (installing dependencies, generating static pages, and putting everything in a container) and deploying to Cloud Run works perfectly. 👌
Cloud SQL
But I just added some functionality that also needs some data from my PostgreSQL instance running on Google Cloud SQL. This data is used when building the project (generating the static pages).
Locally, on my machine, this works fine because the project can connect to my Cloud SQL Proxy. While running in Cloud Run this should also work, as Cloud Run allows connecting to my Postgres instance on Cloud SQL.
My problem
When building my project with Cloud Build, I need access to my database to be able to generate my static pages. I am looking for a way to connect my Docker build step to Cloud SQL, much like Cloud Run (fully managed) provides a mechanism to connect via the Cloud SQL Proxy.
That way I could be connecting to /cloudsql/INSTANCE_CONNECTION_NAME while building my project!
Question
So my question is: How do I connect to my PostgreSQL instance on Google Cloud SQL via the Cloud SQL Proxy while building my project on Google Cloud Build?
Things like my database credentials already live in Secret Manager, so I should be able to use those details, I guess 🤔
You can use whatever container you need to generate your static pages, and download the Cloud SQL Proxy inside the build step to open a tunnel to the database:
- name: '<YOUR CONTAINER>'
  entrypoint: 'sh'
  args:
    - -c
    - |
      wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
      chmod +x cloud_sql_proxy
      ./cloud_sql_proxy -instances=<my-project-id:us-central1:myPostgresInstance>=tcp:5432 &
      <YOUR SCRIPT>
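Once the proxy is listening, whatever runs in place of <YOUR SCRIPT> can use an ordinary TCP connection to 127.0.0.1:5432. A minimal sketch, assuming a Next.js build that reads DATABASE_URL (credentials here are placeholders and would normally come from Secret Manager):

# runs inside the same build step, after the proxy has started in the background
sleep 2   # give the proxy a moment to open the tunnel
export DATABASE_URL="postgresql://db_user:db_password@127.0.0.1:5432/db_name"
npm run build   # static page generation can now query the database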
App Engine has an exec-wrapper image which has the benefit of proxying Cloud SQL for you, so I use that to connect to the DB in Cloud Build (so do some Google tutorials).
However, be warned of trouble ahead: Cloud Build runs exclusively* in us-central1, which means it will be pathologically slow to connect from anywhere else. For one or two operations I don't care, but if you're running a whole suite of integration tests, that simply will not work.
Also, you'll need to grant Cloud Build permission to access Cloud SQL.
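A minimal sketch of that grant, assuming the default Cloud Build service account (PROJECT_NUMBER@cloudbuild.gserviceaccount.com):

PROJECT_ID=my-project-id   # placeholder: your project ID
PROJECT_NUMBER=$(gcloud projects describe "$PROJECT_ID" --format='value(projectNumber)')

# allow Cloud Build to connect to Cloud SQL instances in the project
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
  --role="roles/cloudsql.client"

With that in place, the build step below can reach the instance.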
steps:
  - id: 'Connect to DB using appengine wrapper to help'
    name: gcr.io/google-appengine/exec-wrapper
    args:
      [
        '-i', # The image you want to connect to the db from
        '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME:$SHORT_SHA',
        '-s', # The postgres instance
        '${PROJECT_ID}:${_POSTGRES_REGION}:${_POSTGRES_INSTANCE_NAME}',
        '-e', # Get your secrets here...
        'GCLOUD_ENV_SECRET_NAME=${_GCLOUD_ENV_SECRET_NAME}',
        '--', # And then the command you want to run, in my case a database migration
        'python',
        'manage.py',
        'migrate',
      ]

substitutions:
  _GCLOUD_ENV_SECRET_NAME: mysecret
  _GCR_HOSTNAME: eu.gcr.io
  _POSTGRES_INSTANCE_NAME: my-instance
  _POSTGRES_REGION: europe-west1
* unless you're willing to pay more and get stung by beta software, in which case you can use Cloud Build workers (which, at the time of writing, are in beta anyway... I'll come back and update if they make it into production and fix the issues).
The runtime environment variables (including DB connection details) are not available during build steps.
However, you can use Docker's ENTRYPOINT to run commands when the container starts (after the build steps complete).
I needed to run DB migrations when a new build was deployed (i.e. when the container starts running), and by pointing ENTRYPOINT at a file/command I was able to run the migrations (which require DB connection details that are not available during the build process).
"How to" part is pretty brief and is located here : https://stackoverflow.com/a/69088911/867451
Issue
So I have two AWS instances: a Puppet master and a Puppet client. When I run sudo puppet agent --test on my client, the tasks defined in my master's manifest are not applied to the client instance.
Where I am right now
puppetmaster is installed on the master instance
puppet is installed on client instance
Master just finished signing my client's certificate. No errors were displayed
Master has a /etc/puppet/manifests/site.pp
Client's puppet.conf file has a server=dns_of_master line
My Puppet version is 5.4.0. I'm using the default manifest configuration.
Here's the guide that I'm following: https://www.digitalocean.com/community/tutorials/getting-started-with-puppet-code-manifests-and-modules. The only changes are the site.pp content and that I'm using AWS.
If it helps, here's my AWS instances' AMI: ami-06d51e91cea0dac8d
Details
Here's the content on my master's /etc/puppet/manifests/site.pp:
node default {
package { 'nginx':
ensure => installed
}
service { 'nginx':
ensure => running,
require => Package['nginx']
}
file { '/tmp/hello_world':
ensure => present,
content => 'Hello, World!'
}
}
The file has permissions of 777.
Here's the output when I run sudo puppet agent --test. This is after I ran sudo puppet agent --enable:
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Caching catalog for my_client_dns
Info: Applying configuration version '1578968015'
Notice: Applied catalog in 0.02 seconds
I have looked at other StackOverflow posts with this issue. I know that my catalog is not being applied because of the lack of status messages and the short run time. Unfortunately, the solutions didn't apply to my case:
My site.pp is named correctly and in the correct file path /etc/puppet/manifests
I didn't touch my master's puppet.conf file
I tried restarting the server with sudo systemctl, but nothing happened
So I have fixed the issue. The guide that I was following requires an older version of Ubuntu (16.04, rather than the 18.04 I was using), which needs a different AMI than the one that I used to create the instances.
I have installed Jenkins on my local machine (on premises). I have my server (Linux) in the AWS cloud. I need to share logs with developers without giving them server access. I need to create a Jenkins job so that, by running it, they can get the logs from the server.
How can I do that? If anyone is following the same process to get data from the cloud, please help me solve this. Thanks in advance.
Use the SSH Agent plugin to securely set up your private key
Use SCP to copy the log files to the local workspace
Archive those files to the Jenkins job
You could write a pipeline script to do this. Something like:
node ("linux") {
sshagent (credentials: ['deploy-dev']) {
sh 'scp user#awshostnamehere:/somepath/somelogfile .'
archive somelogfile
}
}
Note that this requires you to fill in the blanks; a quick manual check is sketched below the list. To get this to work you would have to:
Set up an SSH private key credential named deploy-dev
Set up a build agent with the label 'linux', or change that to the label of an agent you do have.
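Before wiring this into the job, you can confirm the copy works by hand from the build agent (a sketch; the host, path, and key file are placeholders):

# run as the user the Jenkins agent runs as
ssh -i ~/.ssh/deploy-dev.pem user@awshostnamehere 'ls -l /somepath/somelogfile'
scp -i ~/.ssh/deploy-dev.pem user@awshostnamehere:/somepath/somelogfile .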
I'm using an Ubuntu 12.04 AMI in EC2 to create a Puppet cluster, and I'm facing problems while configuring it.
The problem is that the master is not able to recognize the slaves.
Do I need more packages other than MySQL?
/etc/mysql/my.cnf
What changes do I need in the above file?
Puppet is a configuration management tool that automates the process of defining and maintaining a consistent state across several developer workstations. It is a descriptive, centralized, client-server based system: the central server is configured, and the clients synchronize themselves to it to ensure that all systems end up in the described state. For instance, the task of ensuring the same development environment on all developer systems in a project can be easily accomplished using Puppet.
Here is a quick procedure to set up a Puppet server and one Puppet client on Amazon EC2 instances running Ubuntu, and also to install the Puppet Dashboard on the server to view the status of the clients.
Prerequisites
Two EC2 instances set up with an Ubuntu AMI.
One instance named puppetserver and the other puppetclient.
Procedure
Puppet server and client set up
Configuring hosts files
View the /etc/hostname file on puppetserver and puppetclient. These are the Puppet server and client hostnames, respectively.
Edit the /etc/hosts file on both systems and add the server and client IPs with their corresponding hostnames.
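For example (a sketch; the IP addresses are placeholders for the instances' private addresses):

# run on both instances; appends the name-to-IP mappings to /etc/hosts
sudo tee -a /etc/hosts <<'EOF'
10.0.0.10   puppetserver
10.0.0.11   puppetclient
EOF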
Setting up the Puppet Server
Enabling the Puppet Labs Package repository
Download the "puppetlabs-release" package for the OS (here, Ubuntu 12.04) on Puppet server
Install the package by running
dpkg -i
Run apt-get update to get new list of available packages.
For example, to enable the repository for Ubuntu 12.04, Precise Pangolin:
wget https://apt.puppetlabs.com/puppetlabs-release-precise.deb
sudo dpkg -i puppetlabs-release-precise.deb
sudo apt-get update
Install Puppet
Install puppetmaster
sudo apt-get update
sudo apt-get install puppetmaster
Setting up the Puppet Client
Install Puppet on the puppet client(s)
sudo apt-get update
sudo apt-get install puppet
Specify the Puppet server hostname on the client. To do this, modify the
/etc/puppet/puppet.conf
file and add the line
server=<puppet server hostname>
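With the hostnames from the /etc/hosts example above, the relevant part of the client's puppet.conf would look like this (a sketch; the setting can live in either the [main] or the [agent] section):

# /etc/puppet/puppet.conf on the client (only the relevant line shown)
[main]
server = puppetserver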
The client can now connect to the Puppet master.
Start the Puppet agent service for establishing first communication between server and client.
sudo puppet agent --verbose --no-daemonize --onetime
This starts a connection to the Puppet master process that is listening on port 8140 on the Puppet server. The output will be verbose, and the agent will not continue running in the background as a daemon. Also, it will run only once; after the connection is closed, the agent process will exit.
The client has made itself known to the server by sending an SSL certificate request. The server needs to certify the client.
To view the list of yet-to-be signed certificates on the server
sudo puppet cert --list
This lists the pending certificate requests, including the one from the client.
Sign the client node's SSL certificate
sudo puppet cert --sign <puppet client name>
The client can now establish a full connection to the server and poll the Puppet master for any configuration updates.
Defining Configurations
We have set up Puppet on both the server and the client and have established communication between the two machines. The next step is to define the configuration for the target systems using Puppet manifests. These manifests are specified in the site.pp file.
As an example, we define a manifest that will create a helloworld.txt file on the client.
Defining manifest
Put the following manifest definition in the /etc/puppet/manifests/site.pp file:
node "<puppet client hostname>" { file { "/home/ubuntu/helloworld.txt": content => "This is test content", ensure => file, owner => "ubuntu", group => "ubuntu", mode => 0644 } }
This manifest specifies that the Puppet client must have a helloworld.txt file in the /home/ubuntu/ folder with the content "This is test content".
Getting changes on client
On puppet client, run the following command.
sudo puppet agent -t
The Puppet client pulls the manifests defined in the site.pp file on the Puppet server. It learns that a file named helloworld.txt with the defined specifications is expected to exist at /home/ubuntu. Since no such file exists on the client, the agent takes action and creates it.
View the 'helloworld.txt' file
To verify that the client is in the state defined by the Puppet server, view the file:
sudo vi /home/ubuntu/helloworld.txt
The file contents are the same as defined in the manifest on the server.
Installing Puppet Dashboard
Overview
Puppet Dashboard is a GUI that interfaces with Puppet. It can be used to view and report the status of all the client nodes. The Puppet Dashboard runs on port 3000 on the Puppet server.
Following are the steps to set it up.
Installing external dependencies
Dashboard is a Ruby on Rails web app and thus requires certain software to be installed
RubyGems
Rake version 0.8.3 or newer
MySQL database server version 5.x
Ruby-MySQL bindings version 2.7.x or 2.8.x
Install the packages
sudo apt-get install -y build-essential irb libmysql-ruby libmysqlclient-dev libopenssl-ruby libreadline-ruby mysql-server rake rdoc ri ruby ruby-dev
Install RubyGems package system
(
  URL="http://production.cf.rubygems.org/rubygems/rubygems-1.3.7.tgz"
  PACKAGE=$(echo $URL | sed "s/\.[^\.]*$//; s/^.*\///")
  cd $(mktemp -d /tmp/install_rubygems.XXXXXXXXXX) && \
  wget -c -t10 -T20 -q $URL && \
  tar xfz $PACKAGE.tgz && \
  cd $PACKAGE && \
  sudo ruby setup.rb
)
Create gem as an alternative name for gem1.8
sudo update-alternatives --install /usr/bin/gem gem /usr/bin/gem1.8 1
Installing Puppet Dashboard
Install puppet-dashboard from puppetlabs package repository
sudo apt-get update
sudo apt-get install puppet-dashboard
Configuring Dashboard
Modify the database.yml file. It can be found at /usr/share/puppet-dashboard/config/database.yml.
Under the key-value pairs for the production environment, the database value 'dashboard_production' specifies the dashboard database name, and the username value 'dashboard' specifies the user for this database. The password value is the password for MySQL. In the next step, we will create both the database and the user.
Creating and Configuring MySQL database
Create the user and database for puppet-dashboard from the MySQL command line:
CREATE DATABASE dashboard_production CHARACTER SET utf8;
CREATE USER 'dashboard'@'localhost' IDENTIFIED BY 'my_password';
GRANT ALL PRIVILEGES ON dashboard_production.* TO 'dashboard'@'localhost';
Configure MySQL's maximum packet size to permit larger rows in database
set global max_allowed_packet = 33554432;
Also modify the mysql configuration file /etc/mysql/my.cnf
Allowing 32MB allows an occasional 17MB row with plenty of spare room
max_allowed_packet = 32M
To create dashboard tables, run the following command in the puppet-dashboard folder
cd /usr/share/puppet-dashboard
rake RAILS_ENV=production db:migrate
Testing that Dashboard is working
Start the dashboard using Ruby’s built-in WEBrick server
cd /usr/share/puppet-dashboard
sudo ./script/server -e production
The Dashboard instance starts on port 3000 using the "production" environment. The Dashboard UI can be viewed at http://<server hostname>:3000.
Configure puppet
Both the puppet server and client need to be configured for the dashboard to receive reports.
Configure agent nodes to submit reports to master by turning their reporting ON.
puppet.conf (on each agent)
[agent]
report = true
Configure the server. Add the http report handler to the Puppet server's reports setting and set reporturl to the Dashboard instance's reports/upload URL:
puppet.conf (on puppet master)
[master]
reports = store, http
reporturl = http://<server hostname>:3000/reports/upload
To enable the Dashboard's external node classifier (ENC), add the following:
puppet.conf (on puppet master)
[master]
node_terminus = exec
external_nodes = /usr/bin/env PUPPET_DASHBOARD_URL=http://<server hostname>:3000 /usr/share/puppet-dashboard/bin/external_node
Testing Puppet's connection to Dashboard
Restart the puppet master
Run one of the puppet agents to test the configurations
sudo puppet agent -t
If the agent run completes without errors, the report has arrived at the Dashboard. To process it, we will activate the delayed_job workers.
Starting delayed_job workers
Run the following command
cd /usr/share/puppet-dashboard
sudo env RAILS_ENV=production script/delayed_job -p dashboard -n 1 -m start
This starts the delayed_job workers and processes the pending tasks.
Puppet is now installed on two EC2 instances, one acting as the server and the other as the client, and puppet-dashboard is installed to view the status of the client nodes.