How can I use a doctrine connection to import a SQL file? - doctrine-orm

I have an app that needs to import a .sql file. I can import the file from the command line with mysql -u my_user -pMyPassword db_name < import.sql, but I'd like to move this into my app. I have some things that need to be done before the import and others after; right now I have to break it into 3 steps. The closest thing to a solution I've found was to get the connection (Doctrine\DBAL\Connection) and use exec(), but it throws syntax errors even though my source file is correct. I'm guessing it's trying to escape things and double-escaping the SQL. The file was generated with mysqldump.

With Symfony using Doctrine, you can do it with:
php app/console doctrine:database:import import.sql

You can use the DBAL import command to get the SQL executed. This is less performant than using the mysql command directly, though, since it loads the entire file into memory.
Otherwise, I'd suggest writing your own Symfony console command.
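A minimal sketch of such a command, assuming the DBAL connection is autowired into it and a DBAL 3 API (executeStatement() was exec() in DBAL 2); the class name, the app:import-sql command name, and the naive split on ";\n" are all assumptions for illustration:

use Doctrine\DBAL\Connection;
use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputArgument;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class ImportSqlFileCommand extends Command
{
    private $connection;

    public function __construct(Connection $connection)
    {
        parent::__construct('app:import-sql'); // made-up command name
        $this->connection = $connection;
    }

    protected function configure(): void
    {
        $this->addArgument('file', InputArgument::REQUIRED, 'Path to the .sql file');
    }

    protected function execute(InputInterface $input, OutputInterface $output): int
    {
        $sql = file_get_contents($input->getArgument('file'));

        // Naive split: good enough for simple mysqldump output, but it would
        // break on semicolons inside string literals.
        foreach (array_filter(array_map('trim', explode(";\n", $sql))) as $statement) {
            $this->connection->executeStatement($statement);
        }

        $output->writeln('Import finished.');

        return 0;
    }
}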

In my case it was:
php bin/console doctrine:database:import my_sql_file.sql

Status September 2021:
I am wary of the code in Doctrine\Bundle\DoctrineBundle\Command\Proxy\ImportDoctrineCommand, which triggers a deprecation warning in its execute function; it is not good programming practice to ignore deprecation warnings. Calling dbal:run-sql would not be efficient enough because of the overhead.
Alternatively, you can call e.g. mysql at the operating-system level. This causes problems in multi-user environments, for example because the database password has to be supplied on the command line. The server must also allow it: exec() is switched off for security reasons in many environments, especially with low-cost hosting providers. Furthermore, this approach is not database-agnostic, and that abstraction is one of Doctrine's most outstanding features.
Therefore, I recommend reading in the SQL data yourself and executing it statement by statement (see the EntityManager's createNativeQuery() function). You then have better options for reacting to possible errors.
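A minimal sketch of that approach, assuming you already have the EntityManager. It goes through the connection's executeStatement(), which suits a dump better than createNativeQuery() (the latter expects a result-set mapping); the file path and the naive split on ";\n" are assumptions that hold for simple mysqldump output only:

$connection = $entityManager->getConnection();
$sql = file_get_contents('import.sql'); // example path

foreach (array_filter(array_map('trim', explode(";\n", $sql))) as $statement) {
    try {
        $connection->executeStatement($statement);
    } catch (\Throwable $e) {
        // React per statement: log it, skip it, or abort the whole import.
    }
}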

Related

Preventing Embedded Python from Running Shell Commands?

I want to embed Python 3.x in our C++ application to allow scripting of maintenance and other tasks. So far it does everything we need, including file manipulation.
The problem is that to meet some specifications (like PCI), we aren't allowed to arbitrarily run shell commands such as with subprocess.call or popen.
Is there a way to prevent these and similar calls from working in embedded Python?
One option is to remove all the modules that allow running arbitrary shell commands, e.g. subprocess.py*, os.py*... and include only the modules that end users are allowed to have immediate access to.
Unless your application is really locked down, I don't think you can prevent someone from loading their own Python module from an arbitrary directory (with import), so I don't think you can prevent execution of arbitrary code as long as you have Python embedded.
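For illustration, a minimal Python-level sketch of the module-removal idea: drop the already-imported modules and install a meta-path finder that refuses to re-import them. The blocklist is illustrative (note that blocking os breaks much of the stdlib), and as noted above this is not a real sandbox:

import sys

BLOCKED = {"subprocess", "os", "pty"}  # illustrative blocklist, not exhaustive

class BlockShellModules:
    """Meta-path finder that refuses to import blocklisted modules."""
    def find_spec(self, name, path=None, target=None):
        if name.split(".")[0] in BLOCKED:
            raise ImportError("module %r is disabled by policy" % name)
        return None

# Drop already-imported copies, then install the blocker.
for mod in [m for m in sys.modules if m.split(".")[0] in BLOCKED]:
    del sys.modules[mod]
sys.meta_path.insert(0, BlockShellModules())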

How to set the sqlite parameters using C++

I need to set some parameters in sqlite, like turning the headers on (.headers ON) and setting the output mode to CSV (.mode csv), and I need this done from C++ rather than the sqlite command line tool.
Could you tell me whether this is possible, and if so, how to achieve it (with an example)?
Thanks
The dot commands are conveniences of the sqlite command line tool. They are not available through the API. CSV output is quite easy to build yourself, though.
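For illustration, a sketch of doing the equivalent of .headers ON and .mode csv with the C API. The test.db file and users table are made up, and the quoting is simplistic (embedded quotes are not escaped):

// Print a header row and CSV-quoted data rows for an arbitrary query.
#include <sqlite3.h>
#include <cstdio>

int main() {
    sqlite3 *db = nullptr;
    if (sqlite3_open("test.db", &db) != SQLITE_OK) return 1;

    sqlite3_stmt *stmt = nullptr;
    if (sqlite3_prepare_v2(db, "SELECT * FROM users", -1, &stmt, nullptr) != SQLITE_OK) return 1;

    int cols = sqlite3_column_count(stmt);
    for (int i = 0; i < cols; ++i)  // header row, like .headers ON
        std::printf("%s%s", sqlite3_column_name(stmt, i), i + 1 < cols ? "," : "\n");

    while (sqlite3_step(stmt) == SQLITE_ROW)  // data rows, like .mode csv
        for (int i = 0; i < cols; ++i) {
            const unsigned char *v = sqlite3_column_text(stmt, i);
            std::printf("\"%s\"%s", v ? reinterpret_cast<const char *>(v) : "",
                        i + 1 < cols ? "," : "\n");
        }

    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}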

Why can't sqlite3 work with NFS?

I switched to using sqlite3 instead of MySQL because I had to run many jobs on a PBS system which doesn't have MySQL. Of course, on my machine I do not have NFS, while the PBS system does. After spending lots of time switching to sqlite3, I went to run many jobs and corrupted my database.
Of course, the sqlite3 FAQ does mention NFS further down, but I didn't even think about this when I started.
I can copy the database at the beginning of the job but it will turn into a merging nightmare!
I would never recommend sqlite to any of my colleagues for this simple reason: "sqlite doesn't work (on the machines that matter)"
I have read rants about NFS not being up to par and it being their fault.
I have tried a few workarounds, but as this post suggests, it is not possible.
Isn't there a workaround which sacrifices performance?
So what do I do? Try some other db software? Which one?
You are using the wrong tool. Saying "I would never recommend sqlite ..." based on this experience is a bit like saying "I would never recommend glass bottles" after they keep breaking when you use them to hammer in a nail.
You need to specify your problem more precisely. My attempt to read between the lines of your question gives me something like this:
You have many nodes that get work through some unspecified path, and produce output. The jobs do not interact because you say you can copy the database. The output from all the jobs can be merged after they are finished. How do you effectively produce the merged output?
Given that as the question, this is my advice:
Have each job produce its output in a structured file, unique to each job. After the jobs are finished, write a program to parse each file and insert it into an sqlite3 database. This uses NFS in a way it can handle (a single process writing sequentially to a file) and uses sqlite3 in a way that is also sensible (a single process writing to a database on a local filesystem). This avoids NFS locking issues while running the job, and should improve throughput because you don't have contention on the sqlite3 database.
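A sketch of that merge step, assuming each job wrote a CSV file; the file pattern and the "results" schema here are invented for illustration:

# Post-job merge: each job wrote its own CSV file (possibly on NFS); a single
# process inserts them all into a local sqlite3 database.
import csv
import glob
import sqlite3

conn = sqlite3.connect("merged.db")  # local filesystem, single writer
conn.execute("CREATE TABLE IF NOT EXISTS results (job_id TEXT, key TEXT, value REAL)")

for path in glob.glob("output/job_*.csv"):
    with open(path, newline="") as f:
        rows = [(r["job_id"], r["key"], float(r["value"])) for r in csv.DictReader(f)]
    conn.executemany("INSERT INTO results VALUES (?, ?, ?)", rows)

conn.commit()
conn.close()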

Using Fabric to search for and run a file

I'm working on a website using Django, and I have Fabric as well, which is very useful for scripting chunks of code that other developers and I use. I'm pretty new to all of these (and Linux in general, tbh), so I have ideas but I don't know how (or if) they are possible. Specifically, I wanted to write a script to start the server on a specific port that we use for testing. Manually, I would just run
python ~/project/manage.py runserver 0.0.0.0:8080
but that gets old. To manually implement that specific command, I have the following code in my fabfile:
def start8080():
    local("python ~/project/manage.py runserver 0.0.0.0:8080")
which works, but I'm not the only one using the port for testing, and ~/project/ is not the only project which would need a similar script. Is there a way to search down the tree from the directory you are working in for the first manage.py, and then run the same command from there?
Fabric functions allow you to use arguments:
from fabric.api import run, task

@task  # not bad to use helper functions once your fabfile is big
def runserver(project_path, port=8000):
    run("python %s/manage.py runserver 0.0.0.0:%s" % (project_path, port))
and you would use it like this:
fab runserver:/home/project,8080
You could also simplify it by creating a task that selects a port per project, although all available projects and their paths would have to be defined there (see the sketch after the example below). Then it could be as easy as:
fab runserver:myprojectname
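A minimal sketch of such a registry task; the project name, path, and port below are made up:

from fabric.api import run, task

PROJECTS = {"myprojectname": ("/home/project", 8080)}  # made-up registry

@task
def runserver_by_name(name):
    path, port = PROJECTS[name]
    run("python %s/manage.py runserver 0.0.0.0:%s" % (path, port))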
Of course you could additionally implement @morgan's answer, making the script check whether the port is open and automatically assign a free one.
You could use the socket module, as shown here, and have the OS figure out your port; Fabric would then just let you know which one it chose.
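A sketch of that idea: binding to port 0 makes the OS pick a free port, which you can then pass on to runserver:

import socket

def free_port():
    # Binding to port 0 asks the OS for any free port. Note the small race:
    # the port could be taken again between close() and your runserver call.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("", 0))
    port = s.getsockname()[1]
    s.close()
    return port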

How can I log in to Linux using C or C++?

I need to programmatically switch the current user to another; then the following code should be executed in the environment (such as path, permissions...) of that other user.
I've found that chroot() and setuid() may be relevant to my case, but these functions need root privileges. I don't have root, but I do have the password of the second user. What should I do?
I have tried the shell command su -, which can switch the current user; can it help me in my C++ code?
Don't laugh at me if my question is very stupid; I'm a true freshman on Linux. :)
Thanks!
When clients connect to the server, the server transfers the data they need, but the precondition is the correct username and password.
If your primary requirement is to authenticate, then try man pam. There are also some libraries that allow authenticating over LDAP. Unfortunately I have no personal experience implementing either.
Otherwise, recreating a complete user environment is unreliable and error-prone. Imagine a typo or an endless loop in the user's ~/.profile.
I haven't done that for some time, but I would also have tried digging in the direction of su: figuring out the user's shell (from /etc/passwd) and trying to exec() it as if it were a login shell (with "-"). But after that you would need some way to communicate a command to it for execution, and that's a problem: shells run differently in batch mode and in interactive mode. As a possible hack, expect (man expect) comes to mind, but it is still IMO too unreliable.
I have in the past used ssh under expect (to feed in the password), but it broke on customized user profiles every other time. With expect, to send a command, one has to somehow recognize that the shell has finished initialization (executing the profile and rc files). But since many people customize their shell prompt and their profile/rc files print extra info, expect quite often recognized the shell prompt too soon.
BTW, depending on the number of users, one could try a setup where each user manually starts the server under their own account. The server would then have access only to the information accessible to that user.
You can use the system function to execute shell commands on the operating system.
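For example (the command here is arbitrary; note that system() runs it as the current user, so by itself it does not switch users):

#include <cstdlib>

int main() {
    // Hands the command line to the shell and returns its exit status.
    return std::system("ls -l /tmp");
}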
You could take a look at the source code of the login command, or you could try using the exec() family of functions to invoke login.
EDIT: Seems like you will need root access in any case.
Is setuid what you're looking for?
I think the key point here is that you can't (easily) change the user of a running process. All the programs like su are effectively starting a new process as the specified user.
Therefore, in your design I would recommend separating off the functionality that needs to be done into a different executable and then investigating execve() to start it.
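A sketch of that split, using fork() and execve() to start a separate helper executable. The "./helper" path is made up, and actually running it as another user would still need su or similar, as discussed above:

#include <cstdio>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    pid_t pid = fork();
    if (pid == 0) {                      // child: replace the image with the helper
        char arg0[] = "helper";
        char *argv[] = { arg0, nullptr };
        char *envp[] = { nullptr };      // pass a minimal environment
        execve("./helper", argv, envp);  // made-up path to the separate executable
        std::perror("execve");           // only reached if execve fails
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0);            // parent: wait for the helper to finish
    return 0;
}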