If you type hdfs dfs -ls / you will get the list of directories in HDFS. Then you can transfer files from local to HDFS using -copyFromLocal or -put into a particular directory, or create a new directory with -mkdir.
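For example, a minimal sketch assuming your HDFS home directory is /user/popeye (the user name from the question) and a hypothetical local file localfile.txt:

hdfs dfs -ls /                                  # list directories at the HDFS root
hdfs dfs -mkdir /user/popeye/data               # create a new directory
hdfs dfs -put localfile.txt /user/popeye/data/  # copy a local file into it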
answered Apr 3, 2016 at 4:00 by BruceWayne

Comment (Apr 3, 2016 at 10:04): You mean that I should try to copy files into the popeye directory?

Comment (Apr 3, 2016 at 10:19): I used the hdfs dfs -ls / command and I was able to copy files to the directory listed by it.

If you run

hdfs dfs -copyFromLocal foo.txt bar.txt
then the local file foo.txt will be copied into your own HDFS home directory as /user/popeye/bar.txt (where popeye is your username). As a result, the following achieves the same:
hdfs dfs -copyFromLocal foo.txt /user/popeye/bar.txt
Before copying any file into HDFS, be certain to create the parent directory first. You don't have to put files in this "home" directory, but (1) it is better not to clutter "/" with all sorts of files, and (2) following this convention helps prevent conflicts with other users.
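A minimal sketch of that order of operations, assuming the user name popeye from above (the -p flag creates intermediate directories as needed):

hdfs dfs -mkdir -p /user/popeye
hdfs dfs -copyFromLocal foo.txt /user/popeye/bar.txt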
answered Jul 17, 2017 at 0:26

As per the first answer, I will elaborate on it in detail for Hadoop 1.x.
Suppose you are running this on a pseudo-distributed setup; you will probably see one or two user directories listed. In a fully distributed cluster, you need administrator rights to perform these operations, and there can be N user directories listed.
So now we move to the point.

First, go to your Hadoop home directory and run this command:
bin/hadoop fs -ls /
The result will look like this:
drwxr-xr-x - xuiob78126arif supergroup 0 2017-11-30 11:20 /user
Here xuiob78126arif is my user name (on this single-node setup, also the master), and that user's home directory in HDFS is:
/user/xuiob78126arif/
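To confirm, you can list the contents of that home directory; a minimal sketch, assuming the same user name:

bin/hadoop fs -ls /user/xuiob78126arif/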
Now you can open the NameNode web UI in your browser at:
http://xuiob78126arif:50070
From there you can see the Cluster Summary, NameNode Storage, and so on.
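A similar summary is also available from the command line; a minimal sketch, assuming you have administrator rights on the cluster:

bin/hadoop dfsadmin -report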
Note: the command will only return results if at least one file or directory exists in HDFS; otherwise you will get:
ls: Cannot access .: No such file or directory.
So, in that case, first put a file into HDFS with bin/hadoop fs -put <localsrc> <dst>, and then run the bin/hadoop fs -ls / command again.
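For example, a minimal sketch assuming a hypothetical local file named sample.txt:

bin/hadoop fs -put sample.txt /user/xuiob78126arif/
bin/hadoop fs -ls /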
I hope this sheds some light on your issue. Thanks.