I'm pretty perplexed... I've got 5 different test computers, all relatively blank Windows XP machines with similar hardware specs. I run a silent install of the Firebird (Classic) database and my application. Some computers require "localhost:" (or 127.0.0.1) before the database location to make a connection, and some simply don't work at all! This is the exact same software running across the board. Does anybody have any suggestions as to what needs to happen to make the connection string universal, or what I could be doing wrong?
It's Firebird version 2.1.1.17910 Classic.
By the way, I tried connecting to the same database using FlameRobin (a small db management tool) and it worked just fine on the computers that don't connect.
If any more information is necessary, just let me know! Thanks a lot in advance.
For anybody's future reference, the answer is in the services file. Apparently the gds_db entry isn't being added for some reason; on the working computers it had been added at some point, probably through some much earlier InterBase tests, is my best guess.
Going to C:\Windows\System32\drivers\etc, opening the file 'services', and adding the following line allows the server to run properly:
gds_db 3050/tcp
I'm not sure whether you are aware of that, but a connection string without "localhost:" or "127.0.0.1:" in front of the database name or alias will use the local protocol, which can't be used when connecting to Firebird Classic Server (see this link for more information). If a host name or IP address is given, then TCP port 3050 will be used for the connection.
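For example, assuming the database sits at C:\data\mydb.fdb (substitute your own path or alias), the three forms look like this:
C:\data\mydb.fdb (local protocol, won't work against Classic)
localhost:C:\data\mydb.fdb (TCP/IP to port 3050 on this machine)
127.0.0.1:C:\data\mydb.fdb (same thing, by IP address)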
If you have registered a server in FlameRobin, and did not leave the hostname field in the registration dialog blank, then the host name will be part of the connection string. That would explain why you can connect using FlameRobin.
As for the differences between the machines: You should first go to the Firebird Server Manager applet and make sure that the server is indeed running on all machines, and that the version is the same.
Does it have something to do with the hosts file on some of the computers? Or is that what you're referring to with your "Some computers require 'localhost:' (or 127.0.0.1) before the database location..." comment?
So today I was in my MongoDB and I typed in show dbs. Other than my usual dbs there is an additional hacked_by_unistellar. Does anyone know what I can do here? It sounds like I have been hacked, unless this is some terrible easter egg I have come across. Please advise. Thank you.
You should close your default MongoDB port 27017. I got the same problem.
I had the same on an old backup server as well.
All I can say is that it is not related to an open, public mongodb port. The mongo server is running on localhost only, but has no access password (under FreeBSD 12).
Obviously, running with a public default port and no password is just what it is, but that's not the answer.
The only ports open on the server are SSH, 80/443 (running Apache 2.4.x) and a node service on port 3xxx, along with Mongo Express (also password protected).
There is also a MySQL server installed with no password, bound to localhost only, but that remained untouched.
It seems more likely that this is a vulnerability somewhere else that is exploiting an unprotected local connection to MongoDB.
Password protecting mongo might protect the database, but does not identify the point of access, which is worrisome.
All of my data is gone!
Well, my only action now is to close off any remaining open connections to my DB instance. My database required a password to access (so being passwordless was not the issue).
However, I just added a basic firewall to bump up the security a bit; at least now I can assume no remote client can connect directly to my DB instance.
I followed this tutorial (jump to the "Step Seven — Set Up a Basic Firewall" part of the post):
https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-16-04
Also, you can allow only certain IP addresses to reach your DB instances by following the instructions at https://www.linode.com/docs/security/firewalls/configure-firewall-with-ufw/#advanced-rules
I use this personally on my main instance where I trust connections would come only from one IP.
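For example (203.0.113.10 stands in for the one IP address you trust, and 27017 is MongoDB's default port; adjust both to your setup):
sudo ufw allow OpenSSH
sudo ufw allow from 203.0.113.10 to any port 27017
sudo ufw enable
The first rule keeps SSH reachable before the firewall is switched on; the second lets only the trusted address talk to MongoDB.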
Hope this helps someone temporarily till a better fix emerges.
Is your MongoDB password protected? If it isn't, anyone can access the database with only its IP address and port.
If your MongoDB isn't password protected, please fix that ASAP! Your info is exposed to everyone...
Even big companies make this mistake from time to time...
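For anyone who needs it, enabling authentication looks roughly like this (the user name and password below are placeholders, and the config path varies by platform). In the mongo shell:
use admin
db.createUser({ user: "admin", pwd: "a-strong-password", roles: [ { role: "root", db: "admin" } ] })
Then in /etc/mongod.conf (or wherever your config lives) turn on:
security:
  authorization: enabled
and restart mongod. After that, every connection has to authenticate.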
Ok, I have a quick question to ask all the veteran Wamp users on this board.
At work, we are currently working on a web application. We are trying to use Wamp to design everything, but we have a problem. All the computers right now have wamp installed to the default location (C:/wamp).
Our problem is, we all want to have access to the same MySQL database so we can edit it at the same time. Right now, only one person can edit it at a time to prevent losing the work of someone else.
When we're done, we just dump the MySQL folder onto a network drive so whoever wants to edit it next can take it and use it.
This isn't very time efficient, so we're wondering if it's possible to install Wamp directly onto a network drive in some way. We tried doing that just now, but we can't get Wamp to start its services.
So any type of advice will be helpful.
I think these two things would help you:
1: Install Wamp on only one system, then in the Apache configuration file listen on that machine's LAN IP so the others can access it. That way you have just one database server.
2: Since you've already installed Wamp on all systems, choose one system's database as the main one and, in the MySQL configuration, define a server whose IP is that system's LAN IP.
Users then connect to MySQL with that IP instead of localhost (see the sketch below).
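A rough sketch of the idea (the file path and addresses below are typical WAMP defaults and just examples, so adjust them to your install): on the machine that will host the database, edit my.ini, e.g. C:\wamp\bin\mysql\mysqlX.Y.Z\my.ini, so MySQL listens on the LAN rather than only on localhost:
[mysqld]
bind-address = 0.0.0.0
Then create a user that's allowed to connect from the other machines (MySQL 5.x syntax):
GRANT ALL PRIVILEGES ON yourdb.* TO 'devuser'@'192.168.1.%' IDENTIFIED BY 'a-password';
FLUSH PRIVILEGES;
Everyone else then points their application at the host machine's LAN IP (e.g. 192.168.1.10) and port 3306 instead of localhost. You may also need to open port 3306 in the Windows firewall on that machine.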
I've got some code that runs in Enterprise guide (SAS Enterprise build, Windows locally, Unix server), which imports a large table via a local install of PC File server. It runs fine for me, but is slow to the point of uselessness for the system tester.
When I use his SAS identity on my Windows PC, the code works; but when I use my SAS identity on his machine it doesn't, so it appears to be a problem with the local machine. We have the same version of EG (same hot fixes installed) connecting to the same server (with the same roles), running the same code in the same project, connecting to the same Access database.
Even a suggestion of what to test next would be greatly appreciated!
libname ACCESS_DB pcfiles path="&db_path"
server=&_CLIENTMACHINE
port=9621;
data permanent.&output_table (keep=[lots of vars]);
format [lots of vars];
length [lots of vars];
set ACCESS_DB.&source_table (rename=([some awkward vars]));
if [var]=[value];
[build some new vars, nothing scary];
;
run;
Addenda: The PC Files Server is running on the same machine where the EG project is being run in both cases; we both have the same version installed. &db_path is the location of the Access database, on a network file store both users can access (in fact other, smaller tables can be retrieved by both users in a sensible amount of time). This server is administered by IT and is not a server we as the business can get software installed on.
Resolving your problem will require more details and is best done in a dialog with SAS Tech Support. You can submit their "online ticket" form or call them by phone.
For example, is the PC Files Server running locally on both your machine and your tester's machine? If yes, is the file referenced by &db_path on a network file server, and does your tester have similar access (meaning both of you can reach it the same way)? Have you considered installing the PC Files Server on your file server rather than on your local PC? Too many questions, I think, for a forum like this. But I could be wrong (it's happened before); perhaps others will have a great answer.
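For example, if IT would host the PC Files Server component on one central Windows machine (it is a Windows-only component; the host name below is made up), every EG user could point the same libname at it instead of at their own PC:
libname ACCESS_DB pcfiles path="&db_path"
    server="central-pcfiles-host"  /* hypothetical shared PC Files Server machine */
    port=9621;                     /* 9621 is the default PC Files Server port */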
We have a multi-threaded process which makes multiple calls to multiple target machines from a source machine using the NetAPI functions, e.g. NetServerGetInfo, LsaOpenPolicy, NetShareEnum, NetWkstaGetInfo, NetWkstaUserEnum, etc. We make quite a significant number of calls and have observed that over a period of time these calls fail. For example, NetServerGetInfo starts returning error 53 after a while. This issue persists until we restart the Workstation service or the machine. Accessing the target shares directly also does not work after such an error is returned by our process.
The source machine from where we are making calls is a Win 2k8 R2 and the target machines are 2k3 servers.
We suspect some kind of issue with the NetAPI calls, or some kind of handle leak.
Has anyone faced similar issues while using these APIs and managed to figure out a solution?
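To illustrate the call pattern (a simplified sketch, not our production code), a single NetServerGetInfo call with the buffer cleanup these APIs require looks roughly like this:

#include <windows.h>
#include <lm.h>                               // NetServerGetInfo, NetApiBufferFree
#pragma comment(lib, "netapi32.lib")

// Query basic info for one target machine and always release the returned buffer.
DWORD QueryServer(const wchar_t* target)
{
    SERVER_INFO_101* info = nullptr;
    NET_API_STATUS status = NetServerGetInfo(
        const_cast<LPWSTR>(target), 101,
        reinterpret_cast<LPBYTE*>(&info));

    if (status == NERR_Success && info != nullptr)
    {
        // ... use info->sv101_name, info->sv101_version_major, etc. here ...
    }

    if (info != nullptr)
        NetApiBufferFree(info);               // skipping this leaks the API-allocated buffer
    return status;                            // 53 == ERROR_BAD_NETPATH ("network path not found")
}

The other enumeration calls follow the same allocate/NetApiBufferFree pattern, and LsaOpenPolicy handles need a matching LsaClose.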
I found a few references online for similar issues:
http://social.technet.microsoft.com/Forums/en-US/windowsserver2008r2networking/thread/9f93508c-71fa-4807-b41a-8f558563afe3/
Snippet from the above link:
Experiencing the exact same issue as stated about except we have 2 Windows Server 2008 R2's acting as Terminal Servers connecting to Server 2003 Shares. Rebooting the terminal servers seems to resolve the problem for about 2-4 days and then re-appears. The XP/Vista/Win7 workstations on the network has no problem accessing the shares on the 2003 Server, only the 2008 R2 servers.
Connecting the the 2003 Shares using the FQDN or IP address works, but using \servername returns network path not found. Setting up WINS on the network did not resolve this, or adding a static entry in the hosts file to the server.
There is no firewall software installed on the servers and we don't use Symantec products on the network (No Symantec Endpoint security).
Viewing of the eventlog also turned up the Event ID: 1006, could not validate DNS server, even though name resolution appears to be functioning without a problem.
http://support.microsoft.com/kb/816621
http://technet.microsoft.com/en-us/library/dd296694%28WS.10%29.aspx
https://serverfault.com/questions/205043/windows-share-the-specified-network-name-is-no-longer-available
I'm running a client/server application locally on my Windows XP PC and for testing purposes I want to run multiple clients.
The server has a configuration file containing the IP addresses of the clients that can connect; in the real world, these would all be on separate hosts with separate IP addresses.
Currently I am able to test locally with a single client which binds to 127.0.0.1 however because I can only have one client-IP mapping in the server configuration (that's how the system works and can't be redesigned!) I can only run one client on my development PC.
I've tried to start another client application bound to 127.0.0.2 connecting to the server which is bound to 0.0.0.0 however the server thinks that the client is connecting from 127.0.0.1 again and so rejects what it believes is a second connection from the first client.
Can anyone suggest a way to get around this problem? I believe I could run one more client bound to the external IP address of the PC but I'd really like to be able to run multiple.
I know I could use VirtualBox or similar to run new instances but I'd like all of the client applications to be running in the Visual Studio debugger.
Any help greatly appreciated!
Nick.
PS. Not sure if it matters but the applications are written in C++ using standard winsock sockets.
You might be able to create more loopback interfaces. See the chosen answer to How do you create a virtual network interface on Windows?
AFAIK Windows 7 (maybe Vista too) lets you add multiple IP addresses to a single interface (card).
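Whichever way you add the extra loopback addresses, the client also has to bind() explicitly to the one it should appear to come from before calling connect(), otherwise the stack will pick 127.0.0.1 for it (and it's worth checking that the bind() actually succeeds rather than failing silently). A minimal winsock sketch, with error checking omitted and port 5000 standing in for your server's real port:

#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

    // Bind the *client* socket to the source address the server should see.
    sockaddr_in local = {};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = inet_addr("127.0.0.2");   // the second loopback address
    local.sin_port = 0;                               // any free local port
    bind(s, reinterpret_cast<sockaddr*>(&local), sizeof(local));

    // Then connect to the server as usual.
    sockaddr_in server = {};
    server.sin_family = AF_INET;
    server.sin_addr.s_addr = inet_addr("127.0.0.1");
    server.sin_port = htons(5000);                    // example port only
    connect(s, reinterpret_cast<sockaddr*>(&server), sizeof(server));

    // ... talk to the server, then clean up ...
    closesocket(s);
    WSACleanup();
    return 0;
}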