How to get/view metadata from the NameNode in Hadoop? - hdfs

In Hadoop HDFS, how can I get the metadata stored in the NameNode? Is it accessible, or is it hidden for security reasons? I tried getting the fsimage, but I could not view it.
Please explain.

Using the command below, you can convert the NameNode fsimage into a text or XML file:
bash$ hdfs oiv -i path/to/fsimage/file -o destination/file
Commands for XML and text output:
bash$ hdfs oiv -i path/to/fsimage/file -o destination/file.xml -p XML
bash$ hdfs oiv -i path/to/fsimage/file -o destination/file.txt -p Indented
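Once converted, the dump is plain text, so standard tools are enough to inspect it. As a sketch, the snippet below greps inode names out of an XML dump; the fragment written here is hypothetical, for illustration only (the exact element layout can vary between Hadoop versions):

```shell
# Hypothetical fragment of an "hdfs oiv ... -p XML" dump (illustration only;
# real dumps are much larger and the element layout may differ by version)
cat > fsimage.xml <<'EOF'
<INodeSection>
  <inode><id>16386</id><type>DIRECTORY</type><name>user</name></inode>
  <inode><id>16387</id><type>FILE</type><name>sample.txt</name></inode>
</INodeSection>
EOF

# List the inode names recorded in the dump
grep -o '<name>[^<]*</name>' fsimage.xml | sed 's/<[^>]*>//g'
```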
For more details, see the documentation for the HDFS Offline Image Viewer (oiv).

Related

Getting an error while copying file from aws EC2 instance to hadoop cluster

I am not able to run any hadoop command; even -ls is not working (I get the same error), and I am not able to create a directory using
hadoop fs -mkdir users.
You can create a directory in HDFS using the command:
$ hdfs dfs -mkdir /your_hdfs_dir_path
and then use the command below to copy data from the local file system to HDFS:
$ hdfs dfs -put /root/Hadoop/sample.txt /your_hdfs_dir_path
Alternatively, you can use:
$ hdfs dfs -copyFromLocal /root/Hadoop/sample.txt /your_hdfs_dir_path

SCP won't find file even though it exists

I'm trying to SCP some files from a remote server. The command I'm using is:
scp -r -i ~/path/to/key ubuntu@address:/home/ubuntu/analysis .
I receive an error:
scp: /home/ubuntu/analysis: No such file or directory.
If I scp another file in my home directory, it does work, e.g.
scp -r -i ~/path/to/key ubuntu@address:/home/ubuntu/.viminfo .
If I create a new file, e.g. with touch new_file.txt, I also cannot download that file.
The permissions and owners for .viminfo and the directory analysis are standard.
Why isn't the SCP working? I have been able to download files from this server before, but something has changed.
Quite confusing - any advice would be appreciated!
Thanks!

How to use sftp in cfncluster?

How can I transfer files using sftp to and from an AWS cluster created using cfncluster?
I have tried
sftp -i path/to/mykey.pem ec2-user@<MASTER.NODE.IP>
which produces
Connection closed
I also tried using Transmit and CyberDuck without any luck.
If you know a way of transferring files to and from cfncluster that does not use sftp, please share that too.
You can add a post_install variable in your config file that points to an extra script to be run after cfncluster deployment:
post_install=https://s3-eu-west-aws-xxxxx/your_script.sh
with your script being like:
#!/bin/bash
sudo sed -i '/Subsystem\ sftp.*$/d' /etc/ssh/sshd_config
sudo sed -i '$iSubsystem sftp internal-sftp' /etc/ssh/sshd_config
sudo service sshd restart
It's quite rough, but it works...
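Before pointing post_install at the real /etc/ssh/sshd_config, the two sed edits can be tried on a stand-in file; this is a minimal sketch of what the script does:

```shell
# Minimal stand-in for /etc/ssh/sshd_config (illustration only)
cat > sshd_config.test <<'EOF'
Port 22
Subsystem sftp /usr/libexec/openssh/sftp-server
EOF

# Same edits as the post_install script: delete the old sftp subsystem line,
# then insert the internal-sftp variant before the last line
sed -i '/Subsystem\ sftp.*$/d' sshd_config.test
sed -i '$iSubsystem sftp internal-sftp' sshd_config.test
cat sshd_config.test
```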

AWS CLI command completion with fish shell

Has anybody been able to set up auto-completion for the AWS CLI with the fish shell? The AWS documentation only offers guides for bash, tcsh, and zsh.
Bash exports the variables COMP_LINE and COMP_POINT, which are used by the aws_completer script provided by Amazon. Is there any equivalent for fish? I'm new to the fish shell and I'm giving it a try.
Building upon David Roussel's answers I cooked up the following:
function __fish_complete_aws
    env COMP_LINE=(commandline -pc) aws_completer | tr -d ' '
end
complete -c aws -f -a "(__fish_complete_aws)"
Put this in a file $HOME/.config/fish/completions/aws.fish so fish can autoload it when necessary.
aws_completer appends a space after every option it prints, and that space gets escaped as a backslash, so trimming it fixes the trailing backslashes.
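The escaping problem is easy to reproduce on its own: each candidate ends with a space, and fish escapes that space into a trailing backslash when it shows the completion. Here the completer's output is faked with printf (a hypothetical sample, not real aws_completer output):

```shell
# Each candidate carries a trailing space, the way aws_completer prints them;
# tr -d ' ' strips it so fish has nothing to escape
printf 'codebuild \ncodecommit \n' | tr -d ' '
```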
Now we can test the completion with the following:
> complete -C'aws co'
codebuild
codecommit
codepipeline
codestar
cognito-identity
cognito-idp
cognito-sync
comprehend
comprehendmedical
connect
configure
configservice
Using the -c flag of commandline helps if you move the cursor back, since it cuts the command line at the cursor so aws_completer can offer the right completions.
I also wanted to get this to work, and I've made some progress, but it's not perfect.
First I took some advice from here, which helps to see how to emulate the bash environment variables that aws_completer expects.
Putting it together I get this:
complete -c aws -f -a '(begin; set -lx COMP_SHELL fish; set -lx COMP_LINE (commandline); /usr/local/bin/aws_completer; end)'
That mostly works, but I get spurious extra backslashes, so if I try to complete "aws ec2 describe-instances --" I get:
dave@retino ~> aws ec2 describe-instances --
--ca-bundle\ --color\ --filters\ --no-dry-run\ --output\ --region\
--cli-connect-timeout\ --debug\ --generate-cli-skeleton --no-paginate\ --page-size\ --starting-token\
--cli-input-json\ --dry-run\ --instance-ids\ --no-sign-request\ --profile\ --version\
--cli-read-timeout\ --endpoint-url\ --max-items\ --no-verify-ssl\ --query\
It looks to me like there is a trailing whitespace character, which I tried to remove using sed:
complete -c aws -f -a '(begin; set -lx COMP_SHELL fish; set -lx COMP_LINE (commandline); echo (/usr/local/bin/aws_completer | sed -e \'s/[ ]*//\') ; end)'
But this doesn't seem to help. It seems that fish expects a different output format than bash for its completer. And indeed the fish documentation for the complete builtin does say that it expects a space-separated list.
So I tried joining the lines with xargs:
complete -c aws -f -a '(begin; set -lx COMP_SHELL fish; set -lx COMP_LINE (commandline); echo (/usr/local/bin/aws_completer | sed -e \'s/[ ]*//\') | xargs echo ; end)'
But this doesn't work either; I just get one completion.
This is annoying - I'm so close, but it doesn't work!
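For what it's worth, one likely explanation for the single completion is that fish splits the command substitution on newlines (which is why the newline-separated output in the first answer works), while xargs echo joins every candidate onto one line. A quick check with fake candidates shows the collapse:

```shell
# Three candidates on separate lines get joined into a single line,
# which fish would then treat as one completion
printf 'codebuild\ncodecommit\ncodepipeline\n' | xargs echo
```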
While this doesn't answer the question about fish directly, I intend to provide an answer to help in the context of auto-completion and the shell.
Amazon has launched a new CLI-based tool forked from the AWS CLI.
aws-shell is a command-line shell program that provides convenience
and productivity features to help both new and advanced users of the
AWS Command Line Interface. Key features include the following.
Fuzzy auto-completion
Commands (e.g. ec2, describe-instances, sqs, create-queue)
Options (e.g. --instance-ids, --queue-url)
Resource identifiers (e.g. Amazon EC2 instance IDs, Amazon SQS queue URLs, Amazon SNS topic names)
Dynamic in-line documentation
Documentation for commands and options are displayed as you type
Execution of OS shell commands
Use common OS commands such as cat, ls, and cp and pipe inputs and outputs without leaving the shell
Export executed commands to a text editor
To find out more, check out the related blog post on the AWS Command Line Interface blog.
Add this line to your .config/fish/config.fish:
complete --command aws --no-files --arguments '(begin; set --local --export COMP_SHELL fish; set --local --export COMP_LINE (commandline); aws_completer | sed \'s/ $//\'; end)'
In case you want to make sure that aws-cli is installed:
test -x (which aws_completer); and complete --command aws --no-files --arguments '(begin; set --local --export COMP_SHELL fish; set --local --export COMP_LINE (commandline); aws_completer | sed \'s/ $//\'; end)'
All credit belongs to this issue thread and a comment by the awesome SO contributor @scooter-dangle.
It's actually possible to map bash's completion to fish's.
See the npm completions.
However, it's probably still better to write a real fish script (it's not hard!).
The command I use in my virtualenv/bin/activate is this:
complete -C aws_completer aws
Looks like aws-cli has fish support too. There is a bundled installer provided with aws-cli that might be worth checking out: activate.fish. I found it in the same bin directory as the aws command.
For example:
ubuntu@ip-xxx-xx-x-xx:/data/src$ tail -n1 ~/venv/bin/activate
complete -C aws_completer aws
ubuntu@ip-xxx-xx-x-xx:/data/src$ source ~/venv/bin/activate
(venv) ubuntu@ip-xxx-xx-x-xx:/data/src$ aws s3 <- hitting TAB here
cp ls mb mv presign rb rm sync website
(venv) ubuntu@ip-xxx-xx-x-xx:/data/src$ aws s3

ec2-register Client.null: null

I am trying to register an Amazon image, and I keep getting the error Client.null: null.
I am able to browse to the URL and see the xml file.
The command I execute is:
ec2-register output.raw.manifest.xml -U <URL>
Client.null: null
any idea what could be the problem?
Thanks!
Keep in mind that this command is used to register instance store images rather than EBS-backed images.
Usually the XML manifest, along with its series of image part files, is uploaded to S3 prior to registering the AMI. Are you sure the bundle is in one of your S3 buckets?
Did you run something like this from the instance you want to create the image from?:
ec2-bundle-vol -d /<someplace-where-you-have-a-lot-of-space> -k YOUR_PRIVATE_KEY -c YOUR_CERTIFICATE -u YOUR_ACCOUNT_NUMBER
ec2-upload-bundle -b YOUR_BUCKET_NAME -m output.raw.manifest.xml -a YOUR_ACCESS_KEY -s YOUR_SECRET_KEY
Then you can run:
ec2-register output.raw.manifest.xml
You can also register your image from the AWS console once you have created the bundle, as shown here:
There are several blogs that talk about how to do this too. For example:
http://www.ryannitz.org/tech-notes/2009/08/09/create-amazon-ec2-ami/
Finally, if you are registering an EBS-backed AMI, you can simply use:
ec2-create-image <instance id>