
HADOOP-16411. Fix javadoc warnings in hadoop-dynamometer.

Signed-off-by: Masatake Iwasaki <iwasakims@apache.org>
Masatake Iwasaki, 5 years ago
parent commit 738c09349e

+ 10 - 6
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/ApplicationMaster.java

@@ -90,19 +90,19 @@ import org.slf4j.LoggerFactory;
 * part of this YARN application. This does not implement any retry/failure
 * handling.
 * TODO: Add proper retry/failure handling
- *
- * <p/>The AM will persist until it has run for a period of time equal to the
+ * <p>
+ * The AM will persist until it has run for a period of time equal to the
 * timeout specified or until the application is killed.
- *
- * <p/>If the NameNode is launched internally, it will upload some information
+ * <p>
+ * If the NameNode is launched internally, it will upload some information
 * onto the remote HDFS instance (i.e., the default FileSystem) about its
 * hostname and ports. This is in the location determined by the
 * {@link DynoConstants#DYNAMOMETER_STORAGE_DIR} and
 * {@link DynoConstants#NN_INFO_FILE_NAME} constants and is in the
 * {@link Properties} file format. This is consumed by this AM as well as the
 * {@link Client} to determine how to contact the NameNode.
- *
- * <p/>Information about the location of the DataNodes is logged by the AM.
+ * <p>
+ * Information about the location of the DataNodes is logged by the AM.
 */
@InterfaceAudience.Public
@InterfaceStability.Unstable
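
The `<p/>` to `<p>` rewrite above is the core of this commit and is repeated across several files below: doclint, enabled by default for the javadoc tool since JDK 8, rejects the self-closing form with "error: self-closing element not allowed", while a bare `<p>` opening each paragraph is accepted. A minimal before/after illustration:

    // Rejected by doclint (javadoc -Xdoclint:all): <p/> is self-closing.
    /**
     * The AM will persist until it times out.
     *
     * <p/>It can also be killed externally.
     */

    // Accepted: <p> opens each paragraph after the first.
    /**
     * The AM will persist until it times out.
     * <p>
     * It can also be killed externally.
     */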
@@ -204,6 +204,7 @@ public class ApplicationMaster {
   *
   * @param args Command line args
   * @return Whether init successful and run should be invoked
+   * @throws ParseException on error while parsing options
   */
  public boolean init(String[] args) throws ParseException {
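For context on the new @throws tag: init() parses its command line with Apache Commons CLI, whose parse call throws the checked ParseException being documented. A minimal sketch of that pattern, assuming commons-cli's DefaultParser and an illustrative option rather than the AM's real option set:

    import org.apache.commons.cli.CommandLine;
    import org.apache.commons.cli.DefaultParser;
    import org.apache.commons.cli.Options;
    import org.apache.commons.cli.ParseException;

    public boolean init(String[] args) throws ParseException {
      Options opts = new Options();
      opts.addOption("help", false, "Print usage");  // illustrative option
      // parse() throws ParseException on unrecognized or malformed options,
      // which is what the new @throws tag documents.
      CommandLine cliParser = new DefaultParser().parse(opts, args);
      if (cliParser.hasOption("help")) {
        return false;  // caller should print usage and skip run()
      }
      return true;
    }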
 
 
@@ -267,6 +268,9 @@ public class ApplicationMaster {
   *
   * @return True if the application completed successfully; false if it exited
   *         unexpectedly, failed, was killed, etc.
+   * @throws YarnException for issues while contacting YARN daemons
+   * @throws IOException for other issues
+   * @throws InterruptedException when the thread is interrupted
   */
  public boolean run() throws YarnException, IOException, InterruptedException {
    LOG.info("Starting ApplicationMaster");

+ 14 - 10
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/Client.java

@@ -108,24 +108,24 @@ import org.slf4j.LoggerFactory;
 * for them to be accessed by the YARN app, then launches an
 * {@link ApplicationMaster}, which is responsible for managing the lifetime of
 * the application.
- *
- * <p/>The Dynamometer YARN application starts up the DataNodes of an HDFS
+ * <p>
+ * The Dynamometer YARN application starts up the DataNodes of an HDFS
 * cluster. If the namenode_servicerpc_addr option is specified, it should point
 * to the service RPC address of an existing namenode, which the datanodes will
 * talk to. Else, a namenode will be launched internal to this YARN application.
 * The ApplicationMaster's logs contain links to the NN / DN containers to be
 * able to access their logs. Some of this information is also printed by the
 * client.
- *
- * <p/>The application will store files in the submitting user's home directory
+ * <p>
+ * The application will store files in the submitting user's home directory
 * under a `.dynamometer/applicationID/` folder. This is mostly for uses
 * internal to the application, but if the NameNode is launched through YARN,
 * the NameNode's metrics will also be uploaded to a file `namenode_metrics`
 * within this folder. This file is also accessible as part of the NameNode's
 * logs, but this centralized location is easier to access for subsequent
 * parsing.
- *
- * <p/>If the NameNode is launched internally, this Client will monitor the
+ * <p>
+ * If the NameNode is launched internally, this Client will monitor the
 * status of the NameNode, printing information about its availability as the
 * DataNodes register (e.g., outstanding under-replicated blocks as block
 * reports arrive). If this is configured to launch the workload job, once the
@@ -134,8 +134,8 @@ import org.slf4j.LoggerFactory;
 * NameNode. Once the workload job completes, the infrastructure application
 * will be shut down. At this time only the audit log replay
 * ({@link AuditReplayMapper}) workload is supported.
- *
- * <p/>If there is no workload job configured, this application will, by
+ * <p>
+ * If there is no workload job configured, this application will, by
 * default, persist indefinitely until killed by YARN. You can specify the
 * timeout option to have it exit automatically after some time. This timeout
 * will be enforced if there is a workload job configured as well.
@@ -248,8 +248,8 @@ public class Client extends Configured implements Tool {
  private Options opts;
 
 
  /**
-   * @param args
-   *          Command line arguments
+   * @param args Command line arguments
+   * @throws Exception on error
   */
  public static void main(String[] args) throws Exception {
    Client client = new Client(
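
Since the hunk header shows that Client extends Configured implements Tool, a main() like this conventionally hands off to ToolRunner, which explains the blanket @throws Exception. A hedged sketch of that standard entry point (the Client constructor arguments are elided in the diff and left elided here):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.util.ToolRunner;

    public static void main(String[] args) throws Exception {
      Client client = new Client(/* constructor args elided in the diff */);
      // ToolRunner parses generic Hadoop options (-conf, -D, ...) and then
      // invokes client.run(remainingArgs); failures propagate as Exception.
      int exitCode = ToolRunner.run(new Configuration(), client, args);
      System.exit(exitCode);
    }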
@@ -386,6 +386,8 @@ public class Client extends Configured implements Tool {
   *
   * @param args Parsed command line options
   * @return Whether the init was successful to run the client
+   * @throws ParseException on error while parsing
+   * @throws IOException for other errors
   */
  public boolean init(String[] args) throws ParseException, IOException {
 
 
@@ -506,6 +508,8 @@ public class Client extends Configured implements Tool {
   * Main run function for the client.
   *
   * @return true if application completed successfully
+   * @throws IOException for general issues
+   * @throws YarnException for issues while contacting YARN daemons
   */
  public boolean run() throws IOException, YarnException {
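The two new @throws tags match the standard YARN client flow, in which nearly every YarnClient call declares both YarnException (cluster-side errors) and IOException (RPC and filesystem errors). What follows is an assumed sketch of such a flow, not the actual body of Client#run():

    import java.io.IOException;
    import org.apache.hadoop.yarn.api.records.ApplicationId;
    import org.apache.hadoop.yarn.api.records.ApplicationReport;
    import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
    import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.client.api.YarnClientApplication;
    import org.apache.hadoop.yarn.exceptions.YarnException;

    public boolean run() throws IOException, YarnException {
      YarnClient yarnClient = YarnClient.createYarnClient();
      yarnClient.init(getConf());  // getConf() inherited from Configured
      yarnClient.start();
      try {
        // createApplication() and submitApplication() both declare
        // YarnException and IOException, hence the documented throws.
        YarnClientApplication app = yarnClient.createApplication();
        ApplicationSubmissionContext ctx = app.getApplicationSubmissionContext();
        ctx.setApplicationName("Dynamometer");  // illustrative name
        // ... container launch context, resource requests, etc. elided ...
        ApplicationId appId = yarnClient.submitApplication(ctx);
        ApplicationReport report = yarnClient.getApplicationReport(appId);
        return report.getFinalApplicationStatus()
            == FinalApplicationStatus.SUCCEEDED;
      } finally {
        yarnClient.stop();
      }
    }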
 
 

+ 4 - 0
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/java/org/apache/hadoop/tools/dynamometer/DynoInfraUtils.java

@@ -116,9 +116,13 @@ public final class DynoInfraUtils {
   * (checked in that order) is set, use that as the mirror; else use
   * {@value APACHE_DOWNLOAD_MIRROR_DEFAULT}.
   *
+   * @param destinationDir destination directory to save a tarball
   * @param version The version of Hadoop to download, like "2.7.4"
   *                or "3.0.0-beta1"
+   * @param conf configuration
+   * @param log logger instance
   * @return The path to the tarball.
+   * @throws IOException on failure
   */
  public static File fetchHadoopTarball(File destinationDir, String version,
      Configuration conf, Logger log) throws IOException {
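With all four parameters and the IOException now documented, a short usage sketch (the destination path and version string are illustrative):

    import java.io.File;
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.tools.dynamometer.DynoInfraUtils;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class FetchTarballExample {
      private static final Logger LOG =
          LoggerFactory.getLogger(FetchTarballExample.class);

      public static void main(String[] args) throws IOException {
        // Fetches the tarball for the requested version from the configured
        // Apache mirror; throws IOException if the download fails.
        File tarball = DynoInfraUtils.fetchHadoopTarball(
            new File("/tmp/dyno-hadoop"),  // illustrative destination dir
            "3.0.0-beta1",                 // version, as in the javadoc example
            new Configuration(),
            LOG);
        LOG.info("Fetched tarball to {}", tarball.getAbsolutePath());
      }
    }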

+ 2 - 2
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/CreateFileMapper.java

@@ -32,8 +32,8 @@ import org.apache.hadoop.mapreduce.Mapper;
/**
 * CreateFileMapper continuously creates 1 byte files for the specified duration
 * to increase the number of file objects on the NN.
- *
- * <p/>Configuration options available:
+ * <p>
+ * Configuration options available:
 * <ul>
 *   <li>{@value NUM_MAPPERS_KEY} (required): Number of mappers to launch.</li>
 *   <li>{@value DURATION_MIN_KEY} (required): Number of minutes to induce
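
The two required keys listed above are ordinary Configuration entries. A hedged sketch of supplying them when driving this workload, assuming the {@value} constants are public on CreateFileMapper (they may in fact live on a parent class):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.tools.dynamometer.workloadgenerator.CreateFileMapper;

    Configuration conf = new Configuration();
    // Both keys are documented as required in the javadoc above.
    conf.setInt(CreateFileMapper.NUM_MAPPERS_KEY, 10);  // mappers to launch
    conf.setInt(CreateFileMapper.DURATION_MIN_KEY, 5);  // minutes to run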

+ 6 - 0
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/WorkloadMapper.java

@@ -34,6 +34,8 @@ public abstract class WorkloadMapper<KEYIN, VALUEIN>
 
 
  /**
   * Return the input class to be used by this mapper.
+   * @param conf configuration.
+   * @return the {@link InputFormat} implementation for the mapper.
   */
  public Class<? extends InputFormat> getInputFormat(Configuration conf) {
    return VirtualInputFormat.class;
@@ -41,18 +43,22 @@ public abstract class WorkloadMapper<KEYIN, VALUEIN>
 
 
  /**
   * Get the description of the behavior of this mapper.
+   * @return description string.
   */
  public abstract String getDescription();
 
 
  /**
   * Get a list of the description of each configuration that this mapper
   * accepts.
+   * @return list of the description of each configuration.
   */
  public abstract List<String> getConfigDescriptions();
 
 
  /**
   * Verify that the provided configuration contains all configurations required
   * by this mapper.
+   * @param conf configuration.
+   * @return whether or not all configurations required are provided.
   */
  public abstract boolean verifyConfigurations(Configuration conf);
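Together, the newly documented methods spell out the contract a concrete workload implements. A hypothetical subclass sketch (the class and its config key are invented for illustration; map() is inherited from Mapper and elided):

    import java.util.Arrays;
    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;

    // Hypothetical workload; not part of Dynamometer.
    public class NoOpWorkloadMapper
        extends WorkloadMapper<LongWritable, NullWritable> {

      @Override
      public String getDescription() {
        return "Performs no filesystem operations; for testing the driver.";
      }

      @Override
      public List<String> getConfigDescriptions() {
        return Arrays.asList(
            "noop.iterations (required): how many no-ops to perform.");
      }

      @Override
      public boolean verifyConfigurations(Configuration conf) {
        // Valid only if our single (hypothetical) required key is present.
        return conf.get("noop.iterations") != null;
      }
    }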
 
 

+ 2 - 0
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/audit/AuditCommandParser.java

@@ -35,6 +35,7 @@ public interface AuditCommandParser {
   * called prior to any calls to {@link #parse(Text, Function)}.
   *
   * @param conf The Configuration to be used to set up this parser.
+   * @throws IOException if error on initializing a parser.
   */
  void initialize(Configuration conf) throws IOException;
 
 
@@ -50,6 +51,7 @@ public interface AuditCommandParser {
   *                           (in milliseconds) to absolute timestamps
   *                           (in milliseconds).
   * @return A command representing the input line.
+   * @throws IOException if error on parsing.
   */
  AuditReplayCommand parse(Text inputLine,
      Function<Long, Long> relativeToAbsolute) throws IOException;
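A skeletal view of how the two documented IOExceptions arise: initialize() rejects unusable configuration, and parse() rejects malformed lines while using relativeToAbsolute to convert a line's relative offset (in ms) into an absolute replay time. The field layout and the construction of AuditReplayCommand below are assumptions, not the real API:

    import java.io.IOException;
    import java.util.function.Function;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.Text;

    // Hypothetical parser; not part of Dynamometer.
    public class TrivialAuditParser implements AuditCommandParser {

      @Override
      public void initialize(Configuration conf) throws IOException {
        // Read parser-specific settings from conf here; throw IOException
        // if a required setting is missing or unusable.
      }

      @Override
      public AuditReplayCommand parse(Text inputLine,
          Function<Long, Long> relativeToAbsolute) throws IOException {
        String[] fields = inputLine.toString().split(",");
        if (fields.length < 2) {
          throw new IOException("Malformed audit line: " + inputLine);
        }
        // Map the line's relative offset (ms) to an absolute replay time.
        long absoluteTimeMs = relativeToAbsolute.apply(Long.parseLong(fields[0]));
        // Constructing the real AuditReplayCommand is elided; its actual
        // constructor is not shown in this diff.
        return null;  // placeholder for the constructed command
      }
    }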

+ 2 - 2
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/audit/AuditLogDirectParser.java

@@ -36,8 +36,8 @@ import org.apache.hadoop.io.Text;
 * It requires setting the {@value AUDIT_START_TIMESTAMP_KEY} configuration to
 * specify what the start time of the audit log was to determine when events
 * occurred relative to this start time.
- *
- * <p/>By default, this assumes that the audit log is in the default log format
+ * <p>
+ * By default, this assumes that the audit log is in the default log format
 * set up by Hadoop, like:
 * <pre>{@code
 *   1970-01-01 00:00:00,000 INFO FSNamesystem.audit: allowed=true ...
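
A short sketch of satisfying the requirement described above (the epoch value is illustrative; AUDIT_START_TIMESTAMP_KEY is the constant referenced by the {@value} tag):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.tools.dynamometer.workloadgenerator.audit.AuditLogDirectParser;

    public class DirectParserSetup {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Epoch millis of the audit log's start; required by this parser so
        // it can compute each event's offset relative to the start time.
        conf.setLong(AuditLogDirectParser.AUDIT_START_TIMESTAMP_KEY,
            1546300800000L);  // illustrative: 2019-01-01T00:00:00Z
        AuditLogDirectParser parser = new AuditLogDirectParser();
        parser.initialize(conf);
      }
    }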

+ 3 - 1
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/audit/AuditLogHiveTableParser.java

@@ -40,7 +40,9 @@ import org.apache.hadoop.io.Text;
 *   INSERT OVERWRITE DIRECTORY '${outputPath}'
 *   SELECT (timestamp - ${startTime}) AS relTime, ugi, cmd, src, dst, ip
 *   FROM '${auditLogTableLocation}'
- *   WHERE timestamp >= ${startTime} AND timestamp < ${endTime}
+ *   WHERE
+ *     timestamp {@literal >=} ${startTime}
+ *     AND timestamp {@literal <} ${endTime}
 *   DISTRIBUTE BY src
 *   SORT BY relTime ASC;
 * </pre>
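
This hunk addresses a second class of javadoc warning: doclint parses raw < and > inside comments as HTML markup, so the comparison operators in the sample query must be wrapped in {@literal}, which renders its argument verbatim without interpretation. In miniature:

    // Trips doclint ("malformed HTML" / "bad use of '>'"):
    //   * WHERE timestamp >= ${startTime}
    // Accepted, and renders the same characters verbatim:
    //   * WHERE timestamp {@literal >=} ${startTime}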

+ 4 - 4
hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/src/main/java/org/apache/hadoop/tools/dynamometer/workloadgenerator/audit/AuditReplayMapper.java

@@ -57,16 +57,16 @@ import static org.apache.hadoop.tools.dynamometer.workloadgenerator.audit.AuditR
 * format of these files is determined by the value of the
 * {@value COMMAND_PARSER_KEY} configuration, which defaults to
 * {@link AuditLogDirectParser}.
- *
- * <p/>This generates a number of {@link org.apache.hadoop.mapreduce.Counter}
+ * <p>
+ * This generates a number of {@link org.apache.hadoop.mapreduce.Counter}
 * values which can be used to get information about the replay, including the
 * number of commands replayed, how many of them were "invalid" (threw an
 * exception), how many were "late" (replayed later than they should have been),
 * and the latency (from client perspective) of each command. If there are a
 * large number of "late" commands, you likely need to increase the number of
 * threads used and/or the number of mappers.
- *
- * <p/>By default, commands will be replayed at the same rate as they were
+ * <p>
+ * By default, commands will be replayed at the same rate as they were
 * originally performed. However, a rate factor can be specified via the
 * {@value RATE_FACTOR_KEY} configuration; all of the (relative) timestamps will
 * be divided by this rate factor, effectively changing the rate at which they
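
The rate-factor arithmetic described above reduces to one line: each relative timestamp is divided by the factor, so a factor of 2.0 replays commands twice as fast. A hedged sketch (only RATE_FACTOR_KEY appears in the diff; the method and parameter names here are assumptions):

    // Schedule a command originally performed relTimeMs after the log start.
    static long replayTimeMs(long relTimeMs, long replayStartMs,
        double rateFactor) {
      // rateFactor > 1.0 compresses the timeline (faster replay);
      // rateFactor < 1.0 stretches it (slower replay).
      return replayStartMs + (long) (relTimeMs / rateFactor);
    }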