From b5605bc41645863239bd17eb6dea59e49d4a57e5 Mon Sep 17 00:00:00 2001
From: Benedikt Elser <benedikt.elser@th-deg.de>
Date: Thu, 17 Jun 2021 11:29:17 +0200
Subject: [PATCH] Doku

---
 README.md | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/README.md b/README.md
index d827f0a..6eea9ec 100644
--- a/README.md
+++ b/README.md
@@ -5,6 +5,7 @@ forked from https://github.com/rancavil/hadoop-single-node-cluster

 Following these steps you can build and use the image to create Hadoop Single Node Cluster containers.

 ## Creating the hadoop image
+If you want to build the image yourself, run:

     $ git clone https://mygit.th-deg.de/systemdesign/hadoop-single-node-cluster.git
     $ cd hadoop-single-node-cluster
@@ -14,12 +15,20 @@ Following these steps you can build and use the image to create a Hadoop Single N

 To run and create a container, execute the next command:

+    $ docker run --name <container-name> -p 9864:9864 -p 9870:9870 -p 8088:8088 --hostname <your-hostname> registry.mygit.th-deg.de/systemdesign/hadoop-single-node-cluster
+
+Replace **container-name** with your preferred name and set **your-hostname** to your machine's IP address or hostname. You can use **localhost** as your-hostname. If you built the image yourself, use

     $ docker run -it --name <container-name> -p 9864:9864 -p 9870:9870 -p 8088:8088 --hostname <your-hostname> hadoop

-Change **container-name** by your favorite name and set **your-hostname** with by your ip or name machine. You can use **localhost** as your-hostname
+or whatever image name you chose in the `docker build` step. When the container starts, the docker-entrypoint.sh entrypoint script creates and starts the Hadoop environment.
+To log in to the running container, use
+
+    $ docker exec -it <container-name> bash
+
 You should get the following prompt:

     hduser@localhost:~$

@@ -28,12 +37,7 @@ To check if the hadoop container is working, go to this url in your browser.
     http://localhost:9870

-**Notice:** the hdfs-site.xml configure has the property, so don't use it in a production environment.
-
-    <property>
-        <name>dfs.permissions</name>
-        <value>false</value>
-    </property>
+**Notice:** hdfs-site.xml sets dfs.permissions to false, so HDFS runs without permission checks. This is insecure, so don't use it in a production environment.

 ## A first example

@@ -82,15 +86,11 @@ Checking the result using the **cat** command on the distributed filesystem:

 To stop the container and shut it down gracefully, execute the following command.

-    hduser@localhost:~$ stop-dfs.sh
-    hduser@localhost:~$ stop-yarn.sh
-
-After that.
+    $ docker stop <container-name>

-    hduser@localhost:~$ exit
+Or press CTRL-C if your container is not running in the background.

 To re-start the container and go back to our Hadoop environment, execute:

     $ docker start -i <container-name>
-- 
GitLab
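The run/exec steps in the patched README assume the NameNode web UI on port 9870 eventually comes up inside the container. A small helper like the following could poll that URL before you `docker exec` into the container. This is a sketch, not part of the repository or this patch: the function name, defaults, and retry behavior are my own, and it assumes `curl` is installed on the host.

```shell
#!/bin/sh
# Hypothetical helper (not from the repo): poll the NameNode web UI
# until it answers, so a script can wait before running jobs.
# Usage: wait_for_namenode [url] [attempts]
wait_for_namenode() {
  url="${1:-http://localhost:9870}"   # NameNode UI, per the docker run -p 9870:9870 mapping
  tries="${2:-30}"                    # number of attempts, one second apart
  i=0
  while [ "$i" -lt "$tries" ]; do
    # -s: silent, -f: fail on HTTP errors; success means the UI is reachable
    if curl -sf -o /dev/null "$url"; then
      echo "NameNode is up at $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "NameNode did not come up at $url after $tries attempts" >&2
  return 1
}
```

With this, something like `wait_for_namenode && docker exec -it <container-name> bash` would only drop you into the container once the UI actually responds.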