
HADOOP-14057. Fix package.html to compile with Java 9.

Akira Ajisaka, 8 years ago
Commit 490abfb10f

+ 22 - 24
hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/package.html

@@ -39,24 +39,24 @@
    <li>Combine the job outputs and print the &pi; bits.</li>
  </ol>

-<table><tr valign=top><td width=420>
+<table summary="The Bits of Pi"><tr valign=top><td width=420>
  <h3>The Bits of &pi;</h3>
  <p>
The table on the right are the results computed by distbbp.
</p>
<ul>
-<p><li>Row 0 to Row 7
+<li>Row 0 to Row 7
<ul><li>They were computed by a single machine.</li>

    <li>A single run of Row 7 took several seconds.</li>
-</ul></li></p>
-<p><li>Row 8 to Row 14
+</ul></li>
+<li>Row 8 to Row 14
<ul><li>They were computed by a 7600-task-capacity cluster.</li>
    <li>A single run of Row 14 took 27 hours.</li>
    <li>The computations in Row 13 and Row 14 were completed on May 20, 2009. 
        It seems that the corresponding bits were never computed before.</li>
-</ul></li></p>
-<p><li>The first part of Row 15 (<tt>6216B06</tt>)
+</ul></li>
+<li>The first part of Row 15 (<tt>6216B06</tt>)

<ul><li>The first 30% of the computation was done in idle cycles of some 
        clusters spread over 20 days.</li>
@@ -68,8 +68,8 @@ The table on the right are the results computed by distbbp.
    <li>The result was posted in
        <a href="http://yahoohadoop.tumblr.com/post/98338598026/hadoop-computes-the-10-15-1st-bit-of-π">this YDN blog</a>.</li>

-</ul></li></p>
-<p><li>The second part of Row 15 (<tt>D3611</tt>)
+</ul></li>
+<li>The second part of Row 15 (<tt>D3611</tt>)
<ul><li>The starting position is 1,000,000,000,000,053, totally 20 bits.</li>
    <li>Two computations, at positions <i>n</i> and <i>n</i>+4, were performed.
    <li>A single computation was divided into 14,000 jobs
@@ -83,10 +83,10 @@ The table on the right are the results computed by distbbp.
        The last bit, the 1,000,000,000,000,072<sup>nd</sup> bit,
        probably is the highest bit (or the least significant bit) of &pi;
        computed ever in the history.</li>
-</ul></li></p>
-
+</ul></li>
+</ul>
</td><td width=20></td><td>
-<table border=1 width=400 cellpadding=5>
+<table border=1 width=400 cellpadding=5 summary="Pi in hex">
<tr><th width=30></th><th>Position <i>n</i></th><th>&pi; bits (in hex) starting at <i>n</i></th></tr>

<tr><td align=right>0</td><td align=right>1</td><td><tt>243F6A8885A3</tt><sup>*</sup></td></tr>
@@ -110,29 +110,27 @@ The table on the right are the results computed by distbbp.
<tr><td align=right>15</td><td align=right>1,000,000,000,000,001</td><td><tt>6216B06</tt> ... <tt>D3611</tt></td></tr>
</table>

-<p><sup>*</sup>
+<sup>*</sup>
By representing &pi; in decimal, hexadecimal and binary, we have

-<ul><table><tr>
+<table summary="Pi in various formats"><tr>
  <td>&pi;</td><td>=</td><td><tt>3.1415926535 8979323846 2643383279</tt> ...</td>
</tr><tr>
  <td></td><td>=</td><td><tt>3.243F6A8885 A308D31319 8A2E037073</tt> ...</td>
</tr><tr>
  <td></td><td>=</td><td><tt>11.0010010000 1111110110 1010100010</tt> ...</td>
-
-</td></tr></table></ul>
+</tr></table>
The first ten bits of &pi; are <tt>0010010000</tt>.
-</p>
</td></tr></table>


  <h3>Command Line Usages</h3>
  The command line format is:
-  <ul><pre>
+  <pre>
$ hadoop org.apache.hadoop.examples.pi.DistBbp \
-         &lt;b&gt; &lt;nThreads&gt; &lt;nJobs&gt; &lt;type&gt; &lt;nPart&gt; &lt;remoteDir&gt; &lt;localDir&gt;</pre></ul>
+         &lt;b&gt; &lt;nThreads&gt; &lt;nJobs&gt; &lt;type&gt; &lt;nPart&gt; &lt;remoteDir&gt; &lt;localDir&gt;</pre>
  And the parameters are:
-  <ul><table>
+  <table summary="command line option">
    <tr>
      <td>&lt;b&gt;</td>
      <td>The number of bits to skip, i.e. compute the (b+1)th position.</td>
@@ -159,14 +157,14 @@ $ hadoop org.apache.hadoop.examples.pi.DistBbp \
      <td>&lt;localDir&gt;</td>
      <td>Local directory for storing output files.</td>
    </tr>
-  </table></ul>
+  </table>
   Note that it may take a long time to finish all the jobs when &lt;b&gt; is large.
   If the program is killed in the middle of the execution, the same command with
   a different &lt;remoteDir&gt; can be used to resume the execution.  For example, suppose
   we use the following command to compute the (10^15+57)th bit of &pi;.
-   <ul><pre>
+   <pre>
$ hadoop org.apache.hadoop.examples.pi.DistBbp \
-	 1,000,000,000,000,056 20 1000 x 500 remote/a local/output</pre></ul>
+         1,000,000,000,000,056 20 1000 x 500 remote/a local/output</pre>
   It uses 20 threads to summit jobs so that there are at most 20 concurrent jobs.
   Each sum (there are totally 14 sums) is partitioned into 1000 jobs.
   The jobs will be executed in map-side or reduce-side.  Each job has 500 parts.
@@ -174,8 +172,8 @@ $ hadoop org.apache.hadoop.examples.pi.DistBbp \
   for storing output is local/output.  Depends on the cluster configuration,
   it may take many days to finish the entire execution.  If the execution is killed,
   we may resume it by
-   <ul><pre>
+   <pre>
$ hadoop org.apache.hadoop.examples.pi.DistBbp \
-         1,000,000,000,000,056 20 1000 x 500 remote/b local/output</pre></ul>
+         1,000,000,000,000,056 20 1000 x 500 remote/b local/output</pre>
</body>
</html>
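
The usage text above only shows how to drive distbbp. For readers who want to see what a single task is computing, here is a minimal single-machine sketch in Java of the Bailey-Borwein-Plouffe (BBP) hex-digit extraction that distbbp distributes across jobs. It is not the Hadoop code: the class and method names are invented, and it only works for modest positions because the plain long modular multiply overflows well before the 10^15 range targeted by distbbp.

// BbpSketch: single-machine illustration of BBP hex-digit extraction.
// NOT the Hadoop implementation; names are invented for this sketch, and
// (a * b) % m below overflows a 64-bit long long before positions near 10^15.
public class BbpSketch {

  // 16^e mod m by square-and-multiply.
  static long powMod(long e, long m) {
    long result = 1 % m;
    long base = 16 % m;
    while (e > 0) {
      if ((e & 1) == 1) {
        result = (result * base) % m;
      }
      base = (base * base) % m;
      e >>= 1;
    }
    return result;
  }

  // Fractional part of the sum over k of 16^(d-k) / (8k + j).
  static double series(int j, long d) {
    double sum = 0.0;
    for (long k = 0; k <= d; k++) {
      long denom = 8 * k + j;
      sum += (double) powMod(d - k, denom) / denom;
      sum -= Math.floor(sum);
    }
    for (long k = d + 1; k <= d + 32; k++) {   // short tail; terms shrink by 1/16 each step
      sum += Math.pow(16.0, d - k) / (8 * k + j);
    }
    return sum - Math.floor(sum);
  }

  // The hex digit of pi at zero-based fractional position d.
  static char hexDigitAt(long d) {
    double x = 4 * series(1, d) - 2 * series(4, d) - series(5, d) - series(6, d);
    x -= Math.floor(x);
    return Character.toUpperCase(Character.forDigit((int) (16 * x), 16));
  }

  public static void main(String[] args) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < 12; i++) {
      sb.append(hexDigitAt(i));
    }
    System.out.println(sb);   // expected: 243F6A8885A3, matching Row 0 of the table above
  }
}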

+ 37 - 37
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/record/package.html

@@ -204,8 +204,8 @@ type := (ptype / ctype)
ptype := ("byte" / "boolean" / "int" |
          "long" / "float" / "double"
          "ustring" / "buffer")
-ctype := (("vector" "<" type ">") /
-          ("map" "<" type "," type ">" ) ) / name)
+ctype := (("vector" "&lt;" type "&gt;") /
+          ("map" "&lt;" type "," type "&gt;" ) ) / name)
</code></pre>

A DDL file describes one or more record types. It begins with zero or
@@ -255,7 +255,7 @@ include "links.jr"
module outlinks {
    class OutLinks {
        ustring baseURL;
-        vector<links.Link> outLinks;
+        vector&lt;links.Link&gt; outLinks;
    };
}
</code></pre>
@@ -269,7 +269,7 @@ record description files as a mandatory argument and an
optional language argument (the default is Java) --language or
-l. Thus a typical invocation would look like:
<pre><code>
-$ rcc -l C++ <filename> ...
+$ rcc -l C++ &lt;filename&gt; ...
</code></pre>


@@ -306,7 +306,7 @@ namespace hadoop {

  class IOError : public runtime_error {
  public:
-    explicit IOError(const std::string& msg);
+    explicit IOError(const std::string&amp; msg);
  };

  class IArchive;
@@ -314,18 +314,18 @@ namespace hadoop {

  class RecordReader {
  public:
-    RecordReader(InStream& in, RecFormat fmt);
+    RecordReader(InStream&amp; in, RecFormat fmt);
    virtual ~RecordReader(void);

-    virtual void read(Record& rec);
+    virtual void read(Record&amp; rec);
  };

  class RecordWriter {
  public:
-    RecordWriter(OutStream& out, RecFormat fmt);
+    RecordWriter(OutStream&amp; out, RecFormat fmt);
    virtual ~RecordWriter(void);

-    virtual void write(Record& rec);
+    virtual void write(Record&amp; rec);
  };


@@ -337,10 +337,10 @@ namespace hadoop {
    virtual bool validate(void) const = 0;

    virtual void
-    serialize(OArchive& oa, const std::string& tag) const = 0;
+    serialize(OArchive&amp; oa, const std::string&amp; tag) const = 0;

    virtual void
-    deserialize(IArchive& ia, const std::string& tag) = 0;
+    deserialize(IArchive&amp; ia, const std::string&amp; tag) = 0;
  };
}
</code></pre>
@@ -445,11 +445,11 @@ private:
  ...
public:

-  std::string& getMyBuf() {
+  std::string&amp; getMyBuf() {
    return mMyBuf;
  };

-  const std::string& getMyBuf() const {
+  const std::string&amp; getMyBuf() const {
    return mMyBuf;
  };
  ...
@@ -474,7 +474,7 @@ and the testrec.jr file contains:
 include "inclrec.jr"
 include "inclrec.jr"
 module testrec {
 module testrec {
     class R {
     class R {
-        vector<float> VF;
+        vector&lt;float&gt; VF;
        RI            Rec;
        buffer        Buf;
    };
@@ -511,8 +511,8 @@ namespace inclrec {
    RI(void);
    virtual ~RI(void);

-    virtual bool operator==(const RI& peer) const;
-    virtual bool operator<(const RI& peer) const;
+    virtual bool operator==(const RI&amp; peer) const;
+    virtual bool operator&lt;(const RI&amp; peer) const;

    virtual int32_t getI32(void) const { return I32; }
    virtual void setI32(int32_t v) { I32 = v; }
@@ -520,16 +520,16 @@ namespace inclrec {
    virtual double getD(void) const { return D; }
    virtual void setD(double v) { D = v; }

-    virtual std::string& getS(void) const { return S; }
-    virtual const std::string& getS(void) const { return S; }
+    virtual std::string&amp; getS(void) const { return S; }
+    virtual const std::string&amp; getS(void) const { return S; }

    virtual std::string type(void) const;
    virtual std::string signature(void) const;

  protected:

-    virtual void serialize(hadoop::OArchive& a) const;
-    virtual void deserialize(hadoop::IArchive& a);
+    virtual void serialize(hadoop::OArchive&amp; a) const;
+    virtual void deserialize(hadoop::IArchive&amp; a);
  };
} // end namespace inclrec

@@ -552,7 +552,7 @@ namespace testrec {

  private:

-    std::vector<float> VF;
+    std::vector&lt;float&gt; VF;
    inclrec::RI        Rec;
    std::string        Buf;

@@ -561,20 +561,20 @@ namespace testrec {
    R(void);
    virtual ~R(void);

-    virtual bool operator==(const R& peer) const;
-    virtual bool operator<(const R& peer) const;
+    virtual bool operator==(const R&amp; peer) const;
+    virtual bool operator&lt;(const R&amp; peer) const;

-    virtual std::vector<float>& getVF(void) const;
-    virtual const std::vector<float>& getVF(void) const;
+    virtual std::vector&lt;float&gt;&amp; getVF(void) const;
+    virtual const std::vector&lt;float&gt;&amp; getVF(void) const;

-    virtual std::string& getBuf(void) const ;
-    virtual const std::string& getBuf(void) const;
+    virtual std::string&amp; getBuf(void) const ;
+    virtual const std::string&amp; getBuf(void) const;

-    virtual inclrec::RI& getRec(void) const;
-    virtual const inclrec::RI& getRec(void) const;
+    virtual inclrec::RI&amp; getRec(void) const;
+    virtual const inclrec::RI&amp; getRec(void) const;

-    virtual bool serialize(hadoop::OutArchive& a) const;
-    virtual bool deserialize(hadoop::InArchive& a);
+    virtual bool serialize(hadoop::OutArchive&amp; a) const;
+    virtual bool deserialize(hadoop::InArchive&amp; a);

    virtual std::string type(void) const;
    virtual std::string signature(void) const;
@@ -619,8 +619,8 @@ double          double              double
ustring         std::string         java.lang.String
buffer          std::string         org.apache.hadoop.record.Buffer
class type      class type          class type
-vector<type>    std::vector<type>   java.util.ArrayList<type>
-map<type,type>  std::map<type,type> java.util.TreeMap<type,type>
+vector&lt;type&gt;    std::vector&lt;type&gt;   java.util.ArrayList&lt;type&gt;
+map&lt;type,type&gt;  std::map&lt;type,type&gt; java.util.TreeMap&lt;type,type&gt;
</code></pre>

<h2>Data encodings</h2>
@@ -651,7 +651,7 @@ Primitive types are serialized as follows:
<li> byte: Represented by 1 byte, as is.
<li> boolean: Represented by 1-byte (0 or 1)
<li> int/long: Integers and longs are serialized zero compressed.
-Represented as 1-byte if -120 <= value < 128. Otherwise, serialized as a
+Represented as 1-byte if -120 &lt;= value &lt; 128. Otherwise, serialized as a
sequence of 2-5 bytes for ints, 2-9 bytes for longs. The first byte represents
the number of trailing bytes, N, as the negative number (-120-N). For example,
the number 1024 (0x400) is represented by the byte sequence 'x86 x04 x00'.
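
As a concrete illustration of this zero-compressed encoding, here is a small Java sketch that follows the prose above only: values in [-120, 128) go into one byte, anything larger gets a marker byte (-120-N) followed by N big-endian payload bytes. It is not the record package's own utility code, and it deliberately leaves out negative values below -120, which the text above does not specify.

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public final class ZeroCompressedSketch {

  // Write a long using the zero-compressed scheme described in the text above.
  static void writeVLong(DataOutputStream out, long value) throws IOException {
    if (value >= -120 && value < 128) {
      out.writeByte((int) value);            // small values fit in a single byte
      return;
    }
    if (value < 0) {
      // The encoding of values below -120 is not spelled out above; omitted here.
      throw new IllegalArgumentException("negative values below -120 not handled in this sketch");
    }
    int n = 8;                               // number of trailing payload bytes
    while (n > 1 && (value >>> ((n - 1) * 8)) == 0) {
      n--;
    }
    out.writeByte(-120 - n);                 // marker byte, e.g. 0x86 (-122) for two bytes
    for (int i = n - 1; i >= 0; i--) {
      out.writeByte((int) (value >>> (i * 8)) & 0xFF);
    }
  }

  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    writeVLong(new DataOutputStream(bytes), 1024);
    StringBuilder sb = new StringBuilder();
    for (byte b : bytes.toByteArray()) {
      sb.append(String.format("x%02x ", b & 0xFF));
    }
    System.out.println(sb.toString().trim()); // prints: x86 x04 x00, matching the example above
  }
}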
@@ -741,7 +741,7 @@ replace CRLF sequences with line feeds. Programming languages that we work
with do not impose these restrictions on string types. To work around these
restrictions, disallowed characters and CRs are percent escaped in strings.
The '%' character is also percent escaped.
-<li> buffer: XML tag &lt;string&&gt;. Values: Arbitrary binary
+<li> buffer: XML tag &lt;string&gt;. Values: Arbitrary binary
data. Represented as hexBinary, each byte is replaced by its 2-byte
hexadecimal representation.
</ul>
@@ -755,7 +755,7 @@ element and a &lt;value&gt; element. The &lt;name&gt; is a string that must
match /[a-zA-Z][a-zA-Z0-9_]*/. The value of the member is represented
by a &lt;value&gt; element.

-<li> vector: XML tag &lt;array&lt;. An &lt;array&gt; contains a
+<li> vector: XML tag &lt;array&gt;. An &lt;array&gt; contains a
single &lt;data&gt; element. The &lt;data&gt; element is a sequence of
&lt;value&gt; elements each of which represents an element of the vector.

@@ -768,7 +768,7 @@ For example:
<pre><code>
class {
  int           MY_INT;            // value 5
-  vector<float> MY_VEC;            // values 0.1, -0.89, 2.45e4
+  vector&lt;float&gt; MY_VEC;            // values 0.1, -0.89, 2.45e4
  buffer        MY_BUF;            // value '\00\n\tabc%'
}
</code></pre>

+ 6 - 7
hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/typedbytes/package.html

@@ -25,8 +25,8 @@ Typed bytes are sequences of bytes in which the first byte is a type code. They
<h3>Type Codes</h3>

Each typed bytes sequence starts with an unsigned byte that contains the type code. Possible values are:
-<p>
-<table border="1" cellpadding="2">
+
+<table border="1" cellpadding="2" summary="Type Codes">
<tr><th>Code</th><th>Type</th></tr>
<tr><td><i>0</i></td><td>A sequence of bytes.</td></tr>
<tr><td><i>1</i></td><td>A byte.</td></tr>
@@ -40,19 +40,19 @@ Each typed bytes sequence starts with an unsigned byte that contains the type co
<tr><td><i>9</i></td><td>A list.</td></tr>
<tr><td><i>10</i></td><td>A map.</td></tr>
</table>
-</p>
+
The type codes <i>50</i> to <i>200</i> are treated as aliases for <i>0</i>, and can thus be used for
application-specific serialization.

<h3>Subsequent Bytes</h3>

These are the subsequent bytes for the different type codes (everything is big-endian and unpadded):
-<p>
-<table border="1" cellpadding="2">
+
+<table border="1" cellpadding="2" summary="Subsequent Bytes">
<tr><th>Code</th><th>Subsequent Bytes</th></tr>
<tr><td><i>0</i></td><td>&lt;32-bit signed integer&gt; &lt;as many bytes as indicated by the integer&gt;</td></tr>
<tr><td><i>1</i></td><td>&lt;signed byte&gt;</td></tr>
-<tr><td><i>2</i></td><td>&lt;signed byte (<i>0 = <i>false</i> and <i>1</i> = <i>true</i>)&gt;</td></tr>
+<tr><td><i>2</i></td><td>&lt;signed byte (<i>0</i> = <i>false</i> and <i>1</i> = <i>true</i>)&gt;</td></tr>
<tr><td><i>3</i></td><td>&lt;32-bit signed integer&gt;</td></tr>
<tr><td><i>4</i></td><td>&lt;64-bit signed integer&gt;</td></tr>
<tr><td><i>5</i></td><td>&lt;32-bit IEEE floating point number&gt;</td></tr>
@@ -62,7 +62,6 @@ These are the subsequent bytes for the different type codes (everything is big-e
<tr><td><i>9</i></td><td>&lt;variable number of typed bytes sequences&gt; &lt;<i>255</i> written as an unsigned byte&gt;</td></tr>
<tr><td><i>10</i></td><td>&lt;32-bit signed integer&gt; &lt;as many (key-value) pairs of typed bytes sequences as indicated by the integer&gt;</td></tr>
</table>
-</p>

</body>
</html>
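
To make the two tables concrete, here is a short Java sketch that emits a few typed bytes values by hand: one unsigned type-code byte followed by the big-endian, unpadded payload the second table prescribes. Only type codes that appear in the tables above (0, 2 and 3) are used; the class and method names are invented for this sketch, and real code should use Hadoop streaming's own typedbytes I/O classes instead.

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public final class TypedBytesSketch {

  // Type code 0: a sequence of bytes, length-prefixed by a 32-bit signed integer.
  static void writeBytes(DataOutputStream out, byte[] data) throws IOException {
    out.writeByte(0);
    out.writeInt(data.length);
    out.write(data);
  }

  // Type code 2: a boolean, written as a signed byte (0 = false, 1 = true).
  static void writeBool(DataOutputStream out, boolean value) throws IOException {
    out.writeByte(2);
    out.writeByte(value ? 1 : 0);
  }

  // Type code 3: a 32-bit signed integer.
  static void writeInt(DataOutputStream out, int value) throws IOException {
    out.writeByte(3);
    out.writeInt(value);   // DataOutputStream writes big-endian, as required
  }

  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(buf);
    writeBool(out, true);                      // 1 type byte + 1 payload byte
    writeInt(out, 1024);                       // 1 type byte + 4 payload bytes
    writeBytes(out, new byte[] {0x04, 0x00});  // 1 type byte + 4 length bytes + 2 payload bytes
    System.out.println(buf.size() + " bytes written");  // prints: 14 bytes written
  }
}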