How can I view the path where the control file gets generated? The .ctl file is not being generated for one of my MultiLoad connections for Teradata. How can I change the location?
However, in the session log I can see it in some /app/.... directory. Does it get copied from that path to my Unix mount point?
I am using a Teradata MultiLoad (mload) loader connection.
Regards,
Amit
You could check the PowerCenter Integration Service process properties (in Informatica's Admin Console, go to that particular Integration Service > Process; all the location-related variables are set there), or you can check the session properties of your loader in the PowerCenter client. If that still doesn't help, elaborating on your question might fetch you a concrete answer.
I'm creating a Django web application that will assist in bare-bones server deployments: a bare-bones server will PXE boot into a custom LiveCD and send a cURL command to register itself with a DRF REST API.
When Django receives the POST request, it will remotely start a Go app that finds the bare-bones server based on entries in the REST API and then starts configuring it. What would be the best way to identify/introduce the bare-bones server to my Go server?
My thought is either to pass a parameter that identifies the server, so that Go can pull the server's info from the REST API, or to add a Boolean field in the REST API so that the Go app looks for entries that are TRUE and flips them to FALSE when it starts setting up a server.
Would that be the best way to get this done, or is there a better way?
Actually, PXELINUX comes with an identification mechanism based on the system's MAC address, and the configuration can be customized accordingly. Since you need to do accounting of your bare metal servers anyway (port security, anyone? ;) ), you should already know the MAC addresses of all the interfaces on them.
Your TFTP directory usually looks like this (the path prefix may differ):
/srv/pxe/pxelinux.cfg/default
Now what happens is that your system starts up, sends a DHCP request, and gets an offer containing the DHCP options "next-server" and "filename". When the system accepts said offer, it connects to the "next-server" and requests "filename", usually pxelinux.0. Here is your first potential hook: write a TFTP server that handles the request and registers your system.
Once pxelinux.0 is executed, it will read the above config file. But here is the thing: say the MAC address of the system is 23:67:33:5a:cc:e8 and the file
/srv/pxe/pxelinux.cfg/01-23-67-33-5a-cc-e8
exists (the 01- prefix is the ARP hardware type for Ethernet); then that file will be read instead. This is your second hook: the request will be logged by the TFTP server.
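To make that second hook concrete, here is a minimal sketch of such a listener in Python. It only observes PXELINUX config requests and extracts the MAC address; it does not serve any files, so in practice you would fold this logic into a real TFTP server. Binding to port 69, the path prefix, and the print() registration are all assumptions:

import socket
import struct

# TFTP read requests (RRQ, opcode 1) look like: opcode | filename | 0 | mode | 0.
# Binding UDP/69 requires root and conflicts with a running TFTP daemon;
# this is purely to illustrate the hook.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 69))

while True:
    data, addr = sock.recvfrom(1024)
    opcode = struct.unpack("!H", data[:2])[0]
    if opcode == 1:  # RRQ
        filename = data[2:].split(b"\x00")[0].decode(errors="replace")
        if filename.startswith("pxelinux.cfg/01-"):
            mac = filename.rsplit("/", 1)[1][3:]  # drop the "01-" prefix
            print(f"system {mac} ({addr[0]}) is PXE booting")  # register it here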
Regardless of whether the default or a system-specific config file is used, we are basically talking about a boot loader config file (similar in spirit to a GRUB config). Assuming you use Kickstart to install the system, it will look something like this:
default linux
prompt 0
timeout 1

label linux
    kernel /images/yourdistro/vmlinuz
    ipappend 2
    append initrd=/images/yourdistro/initrd.img console=ttyS0,115200
A side note on the example: the ipappend 2 line makes PXELINUX pass the MAC address of the boot interface to the kernel as a BOOTIF=... parameter, so the booted system knows which interface it PXE-booted from. Now, here is the thing: you have several possibilities to execute a custom program on boot:
Append the path to your executable to the append parameter. By convention, the kernel passes all parameters it does not recognize on to PID 1. Though I have not tested whether systemd adheres to this convention and simply executes an unknown parameter in turn, I assume as much.
cron. Most cron implementations nowadays support the @reboot time definition.
the init system, be it systemd, OpenRC, or good ol' SysV init.
Last but not least, how to configure the machine: I strongly suggest not reinventing the wheel. I had quite similar requirements in a (closed source) project. We used Kickstart to do the basic system installation and simply fired a curl command at Ansible Tower after reboot, triggering the more detailed configuration. Since we had a DHCP server with the MAC, an IP reserved for said MAC, and a hostname readily configured (dnsmasq, cough, cough), that was not much of a problem. Basically, all we had to do manually was register the MAC address and assign an IP and a hostname, then fire up the machine. A sketch of the registration step follows below.
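If you want the LiveCD itself to do the registration, here is a minimal Python sketch. The DRF endpoint URL, the JSON field names, and the interface name are illustrative assumptions, not anything your API already defines:

import json
import urllib.request

def primary_mac(interface="eth0"):
    # Read the MAC address of the interface the machine booted from.
    with open(f"/sys/class/net/{interface}/address") as f:
        return f.read().strip()

payload = json.dumps({"mac": primary_mac(), "status": "ready"}).encode()
req = urllib.request.Request(
    "http://deploy.example.com/api/servers/",  # assumed DRF endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # expect 201 Created on successful registration

The Go side can then look the machine up by its MAC address instead of relying on a Boolean claim flag.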
I have a REST service with a simple GET and POST method in Java EE. The POST method saves the received JSON in a file using Gson and a FileWriter. On my local system the file is saved in C:\Users...\Documents\Glassfish Domains\Domain\config. The GET method reads this file and returns the JSON.
When I test this on my local system using Postman, everything works fine, but when I deploy the project on an Ubuntu Server VM with GlassFish installed, I can connect but I get an HTTP 500 Internal Server Error. I managed to find out that the errors are thrown when the FileReader/FileWriter tries to do its work. I suppose access to this directory is restricted on a real GlassFish instance.
So my question is whether there is a file path where I am allowed to write a file and read it afterwards. The file has to stay there (at least while the application runs) and has to be the same for every request (a scheduler writes some data into the file every 24 hours). If anyone has a simple alternative for saving the JSON in Java EE without an extra database instance, that would be helpful, too :)
If you have access to the server, you can create a directory owned by the user the GlassFish server runs as. Configure this path in a property file in your application and then use that property for reading and writing the file. This way you can configure different directory paths in different environments.
In order to keep my VPN connection active, I wrote this little AppleScript:
tell application "System Events"
    tell network preferences
        connect service "VPNServiceNameIConfigured"
    end tell
end tell
This script works fine!
I wrote myself a LaunchDaemon .plist to call the script on startup, on wake, and every 5 seconds. This means that every time my VPN connection breaks, it is automatically reconnected (if possible) within 5 seconds.
This part works fine, but I want to improve it a little. I want to use an if clause like:
if network preferences service "VPNServiceNameIConfigured" is not connected
    connect it
else
    do nothing
Is there a way to do that? If so, I would be very happy about an example or good documentation on using AppleScript to handle System Events.
Thank you!
The place to look for that information is the dictionary for System Events. You can open any dictionary using “Open Dictionary…” in AppleScript Editor’s File menu.
You don’t give enough information to write exact code; for example, does your VPNServiceNameIConfigured service contain any configurations?
If you can get a configuration, you should be able to check the “connected” property of that configuration. Something like:
if connected of current configuration of service "VPNServiceNameIConfigured" is false then
    connect service "VPNServiceNameIConfigured"
end if
Depending on your setup, you might also be able to check the “active” boolean of service "VPNServiceNameIConfigured". Here’s a simple test script that works on my setup to check that my Wi-Fi is active:
tell application "System Events"
    tell network preferences
        set myConnection to location "Automatic"
        --get name of every service of myConnection
        set myService to service "Wi-Fi" of myConnection
        --get properties of myConnection
        if active of myService is false then
            display dialog "Need to reconnect"
        end if
    end tell
end tell
The “connected” boolean is only available on a configuration, however, and that may be your more reliable option, if your service contains a configuration.
I am trying to set up GeoServer as a backend to our MVC app. GeoServer works great... except that it only lets me do one thing at a time. If I am processing a shapefile, the REST interface and the GUI lock up until the job is done.
I know there is the option to cluster a GeoServer configuration, but that would only be load balancing: instead of one read/write operation at a time I would have two, whereas we need to scale up to at least 20 concurrent tasks.
All the references I have found online talk about limiting the number of concurrent connections, but in my case only one is ever allowed.
Obviously GeoServer is used in production environments that serve more than one request at a time. I am just stumped about how to make that happen.
A few weeks ago my colleague sent the email below to the GeoServer development team. The problem was described as a configuration lock, and we were told that changing a variable would release it. The only place I have seen this variable is in the source code on GitHub.
Is there a way to specify, in one of GeoServer's config files, that these locks should be turned off so I can do concurrent reads/writes? If anybody out there has encountered this before, please help! Thanks!
On Fri, May 16, 2014 at 7:34 PM, Sean Winstead wrote:
Hi,
We are using GeoServer 2.5 RC2. When uploading a shapefile via the REST API, the server does not respond to other requests until after the shapefile has been processed.
For example, if I start a file upload and then click on the Layers menu item in the web app, the response for the Layers page is not received until after the file upload and processing have completed.
I researched the issue but did not find a suitable cause/answer. I did install the control-flow extension and created a controlflow.properties file in the data directory, but this did not appear to have any effect.
How do I diagnose the cause of this behavior?
Simple, it's the configuration lock. Our configuration subsystem is not able to handle concurrent writes correctly, or reads during writes, so a whole-instance read/write lock is taken every time you use the REST API or the user interface; nothing can be done while the lock is in place.
If you want, you can disable it using the system variable GeoServerConfigurationLock.enabled:
-DGeoServerConfigurationLock.enabled=false
but of course we cannot predict what will happen to the configuration if you do that.
Cheers
Andrea
-DGeoServerConfigurationLock.enabled=false refers to a startup parameter passed to the java command when GeoServer is first started. Looking at GeoServer's bin/startup.sh and bin\startup.bat, the approved way to set it is via an environment variable named JAVA_OPTS. You will see lines like
if [ -z "$JAVA_OPTS" ]; then
export JAVA_OPTS="-XX:MaxPermSize=128m"
fi
in startup.sh and
if "%JAVA_OPTS%" == "" (set JAVA_OPTS=-XX:MaxPermSize=128m)
in startup.bat. You will need to change those to
... JAVA_OPTS="-DGeoServerConfigurationLock.enabled=false -XX:MaxPermSize=128m"
or define the JAVA_OPTS environment variable similarly before GeoServer is started.
The development team's response, "of course we cannot predict what will happen to the configuration if you do that", suggests that there may be concurrency issues lurking, and they are likely to surface more often as you scale up. You may want to think about decoupling the backend processing of those shapefiles from the REST requests by using some queueing mechanism instead of disabling GeoServer's configuration lock; a sketch follows below.
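For illustration, here is a minimal Python sketch of that queueing idea: a single worker thread serializes shapefile uploads so that only one request holds GeoServer's configuration lock at a time. The host, credentials, and workspace/store names are assumptions; the PUT to .../file.shp is GeoServer's REST shapefile upload endpoint:

import queue
import threading
import requests

GEOSERVER = "http://localhost:8080/geoserver"  # assumed host and port
AUTH = ("admin", "geoserver")                  # assumed credentials
upload_queue = queue.Queue()

def worker():
    # A single worker processes uploads strictly one at a time,
    # while callers (e.g. MVC controllers) return immediately.
    while True:
        workspace, store, zip_path = upload_queue.get()
        try:
            with open(zip_path, "rb") as f:
                resp = requests.put(
                    f"{GEOSERVER}/rest/workspaces/{workspace}/datastores/{store}/file.shp",
                    data=f,
                    headers={"Content-Type": "application/zip"},
                    auth=AUTH,
                )
            resp.raise_for_status()
        finally:
            upload_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

# Callers enqueue work instead of hitting GeoServer directly:
upload_queue.put(("myworkspace", "mystore", "/tmp/parcels.zip"))
upload_queue.join()  # block only if you need to wait for completion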
Thank you, I figured it out. We didn't even need to do this: the problem was that we were using a single login (admin) for the REST interface instead of creating a new user for each repository. With separate users, the locking issue doesn't happen.
A vendor's remote system has data that one of our internal systems needs daily. Our system currently receives the data each day by the vendor's system pushing a CSV file via SFTP. The data is < 1 KB in size.
We are considering a pull via SFTP instead. The file "should" always be ready no later than a defined time (5 ET), so one problem with this approach is that our system may have to poll for a while before it eventually gets the file.
How should a system get data from a remote third-party data source? The vendor also provides a web service and a subscription feed service, and they will also consider other ideas for how we acquire the data.
Assuming your system is Unix-like and the other side runs an OpenSSH server, I would add the public key of the user your application runs under to the authorized_keys file on the remote side. After this, your application can poll for the existence of an updated file by running
ssh username_at_remote_end@ip_address_of_remote stat -c %Z path_to_file
which will output the time of the last status change of the file, in seconds since the Unix epoch, or fail with a non-zero exit code if the file is not found.
To actually retrieve the file (after checking that the timestamp is within the last 24 hours), I would use
t=$(mktemp -d) && scp username_at_remote_end@ip_address_of_remote:path_to_file $t && echo $t
which will copy it to a temporary directory under /tmp, readable only by the user your application runs as, and print the name of that directory.
All programming languages support running commands locally (in C, using system(); in Java, using a Process; ...). To keep things simple, each command would go into a script file (say poll.sh and retrieve.sh); if the remote end changes, you only have to update and test the scripts. There are direct interfaces to OpenSSH, but it is probably simpler to outsource all of that work to bash via scripts as seen above; a library-based sketch follows below.
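If you would rather stay inside one process than shell out, here is a minimal sketch of the same poll-then-fetch flow using Python's paramiko library. The host, user, and remote path are the same placeholders as above; key-based authentication and the local target file name are assumptions:

import time
import paramiko

HOST = "ip_address_of_remote"     # placeholder, as above
USER = "username_at_remote_end"   # placeholder, as above
REMOTE_PATH = "path_to_file"      # placeholder, as above

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER)  # authenticates with the key set up above

sftp = client.open_sftp()
try:
    attrs = sftp.stat(REMOTE_PATH)                # raises IOError if missing
    if time.time() - attrs.st_mtime < 24 * 3600:  # modified in the last 24 h?
        sftp.get(REMOTE_PATH, "/tmp/daily_feed.csv")  # assumed local target
finally:
    sftp.close()
    client.close()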
If you have similar requirements for more than one case, you can consider using an integration server (middleware) to implement this. There you can design a trigger that invokes that particular pull after 5 ET; a polling sketch follows below.
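For illustration, a minimal Python sketch of such a trigger, assuming "5 ET" means 5:00 PM Eastern; the file_ready() check and the poll intervals are placeholders:

import os
import time
from datetime import datetime
from zoneinfo import ZoneInfo

def file_ready():
    # Placeholder check; in practice this would be the SFTP stat() poll
    # from the previous answer or a call to the vendor's web service.
    return os.path.exists("/tmp/daily_feed.csv")

# Wait until 5 PM Eastern, then poll every 5 minutes for the file.
while datetime.now(ZoneInfo("America/New_York")).hour < 17:
    time.sleep(60)
while not file_ready():
    time.sleep(300)
# ...pull and process the file here...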
If this is required for only one case, then ask your provider about the web service option, where you call their web service once a day after 5 ET by sending a SOAP request for the data and receive a SOAP response rather than a CSV file. You can implement this very easily in your system; it will be more secure and efficient, and you will have more control over the data, transport, and security.