---
title: Running concurrently with HDFS
linktitle: Running with HDFS
weight: 1
---
Ozone is designed to work with HDFS, so it is easy to deploy Ozone in an existing HDFS cluster.

The container manager part of Ozone can run inside DataNodes as a pluggable module or as a standalone component. This document describes how to start it as an HDFS DataNode plugin.
To activate Ozone, define the service plugin implementation class:
{{< highlight xml >}}
<property>
   <name>dfs.datanode.plugins</name>
   <value>org.apache.hadoop.ozone.HddsDatanodeService</value>
</property>
{{< /highlight >}}
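One way to sanity-check that the property is picked up is `hdfs getconf`, which prints the effective value of a configuration key. The snippet below is a sketch: it guards against running on a machine without the Hadoop CLI installed.

```shell
# Print the effective value of dfs.datanode.plugins; if the hdfs CLI is
# not on PATH (e.g. outside the cluster), fall back to a notice instead.
OUT=$(hdfs getconf -confKey dfs.datanode.plugins 2>/dev/null \
      || echo "hdfs CLI not available on this machine")
echo "$OUT"
```

On a correctly configured node this should print `org.apache.hadoop.ozone.HddsDatanodeService`.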
You also need to add the ozone-datanode-plugin jar file to the classpath:
{{< highlight bash >}}
export HADOOP_CLASSPATH=/opt/ozone/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin.jar
{{< /highlight >}}
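A common failure mode is pointing `HADOOP_CLASSPATH` at a jar that does not exist in your distribution layout. The sketch below (using the path from the example above, which may differ in your install) checks that the jar is actually present before you start the DataNode.

```shell
# Verify the plugin jar exists at the configured location before starting
# the datanode; adjust JAR to match your Ozone distribution layout.
JAR=/opt/ozone/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin.jar
if [ -f "$JAR" ]; then
  echo "plugin jar found: $JAR"
else
  echo "plugin jar missing at $JAR; adjust the path to your Ozone distribution"
fi
```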
To start Ozone with HDFS you should start the following components:

 1. HDFS Namenode (from the Hadoop distribution)
 2. HDFS Datanode (from the Hadoop distribution, with the plugin jar on the classpath from the Ozone distribution)
 3. Storage Container Manager (from the Ozone distribution)
 4. Ozone Manager (from the Ozone distribution)
Check the datanode log to verify that the HDDS/Ozone plugin has started. The log should contain a line like this:
    2018-09-17 16:19:24 INFO HddsDatanodeService:158 - Started plug-in org.apache.hadoop.ozone.web.OzoneHddsDatanodeService@6f94fb9d