@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class ParquetSerDe extends Object implements Serializable, Cloneable, StructuredPojo
A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet.
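Below is a minimal, illustrative sketch of configuring the class through its fluent with* methods. The wrapper class is hypothetical, and every value simply restates a default documented on this page.

```java
import com.amazonaws.services.kinesisfirehose.model.ParquetSerDe;

public class ParquetSerDeExample {
    public static void main(String[] args) {
        // Illustrative values only; each property falls back to the
        // service default described on this page when left unset.
        ParquetSerDe parquetSerDe = new ParquetSerDe()
                .withCompression("SNAPPY")              // default codec
                .withBlockSizeBytes(256 * 1024 * 1024)  // 256 MiB (default; minimum 64 MiB)
                .withPageSizeBytes(1024 * 1024)         // 1 MiB (default; minimum 64 KiB)
                .withEnableDictionaryCompression(true)
                .withMaxPaddingBytes(0)                 // default
                .withWriterVersion("V1");               // default
        System.out.println(parquetSerDe);
    }
}
```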
| Constructor and Description | 
|---|
| ParquetSerDe() | 

| Modifier and Type | Method and Description |
|---|---|
| ParquetSerDe | clone() |
| boolean | equals(Object obj) |
| Integer | getBlockSizeBytes() The Hadoop Distributed File System (HDFS) block size. |
| String | getCompression() The compression codec to use over data blocks. |
| Boolean | getEnableDictionaryCompression() Indicates whether to enable dictionary compression. |
| Integer | getMaxPaddingBytes() The maximum amount of padding to apply. |
| Integer | getPageSizeBytes() The Parquet page size. |
| String | getWriterVersion() Indicates the version of row format to output. |
| int | hashCode() |
| Boolean | isEnableDictionaryCompression() Indicates whether to enable dictionary compression. |
| void | marshall(ProtocolMarshaller protocolMarshaller) Marshalls this structured data using the given ProtocolMarshaller. |
| void | setBlockSizeBytes(Integer blockSizeBytes) The Hadoop Distributed File System (HDFS) block size. |
| void | setCompression(String compression) The compression codec to use over data blocks. |
| void | setEnableDictionaryCompression(Boolean enableDictionaryCompression) Indicates whether to enable dictionary compression. |
| void | setMaxPaddingBytes(Integer maxPaddingBytes) The maximum amount of padding to apply. |
| void | setPageSizeBytes(Integer pageSizeBytes) The Parquet page size. |
| void | setWriterVersion(String writerVersion) Indicates the version of row format to output. |
| String | toString() Returns a string representation of this object. |
| ParquetSerDe | withBlockSizeBytes(Integer blockSizeBytes) The Hadoop Distributed File System (HDFS) block size. |
| ParquetSerDe | withCompression(ParquetCompression compression) The compression codec to use over data blocks. |
| ParquetSerDe | withCompression(String compression) The compression codec to use over data blocks. |
| ParquetSerDe | withEnableDictionaryCompression(Boolean enableDictionaryCompression) Indicates whether to enable dictionary compression. |
| ParquetSerDe | withMaxPaddingBytes(Integer maxPaddingBytes) The maximum amount of padding to apply. |
| ParquetSerDe | withPageSizeBytes(Integer pageSizeBytes) The Parquet page size. |
| ParquetSerDe | withWriterVersion(ParquetWriterVersion writerVersion) Indicates the version of row format to output. |
| ParquetSerDe | withWriterVersion(String writerVersion) Indicates the version of row format to output. |
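For orientation, a ParquetSerDe is not used on its own: it is handed to a Serializer inside a delivery stream's record format conversion settings. The sketch below assumes the companion model classes Serializer, OutputFormatConfiguration, and DataFormatConversionConfiguration from the same package; it is a minimal wiring sketch, not a complete delivery stream definition.

```java
import com.amazonaws.services.kinesisfirehose.model.DataFormatConversionConfiguration;
import com.amazonaws.services.kinesisfirehose.model.OutputFormatConfiguration;
import com.amazonaws.services.kinesisfirehose.model.ParquetSerDe;
import com.amazonaws.services.kinesisfirehose.model.Serializer;

public class ParquetConversionExample {
    public static void main(String[] args) {
        // The ParquetSerDe is supplied to the delivery stream through a
        // Serializer inside the output format configuration.
        DataFormatConversionConfiguration conversion =
                new DataFormatConversionConfiguration()
                        .withEnabled(true)
                        .withOutputFormatConfiguration(
                                new OutputFormatConfiguration()
                                        .withSerializer(new Serializer()
                                                .withParquetSerDe(new ParquetSerDe())));
        System.out.println(conversion);
    }
}
```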
public void setBlockSizeBytes(Integer blockSizeBytes)
The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
Parameters:
blockSizeBytes - The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.

public Integer getBlockSizeBytes()
The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.

Returns:
The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
public ParquetSerDe withBlockSizeBytes(Integer blockSizeBytes)
The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
Parameters:
blockSizeBytes - The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
Returns:
Returns a reference to this object so that method calls can be chained together.
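A small illustrative sketch (hypothetical wrapper class and value) of overriding the default block size, for example to match a target HDFS cluster's block size:

```java
import com.amazonaws.services.kinesisfirehose.model.ParquetSerDe;

public class BlockSizeExample {
    public static void main(String[] args) {
        // Hypothetical tuning for a later S3-to-HDFS copy: align the Parquet
        // block size with the target cluster's HDFS block size (minimum 64 MiB).
        ParquetSerDe serDe = new ParquetSerDe().withBlockSizeBytes(128 * 1024 * 1024); // 128 MiB
        System.out.println(serDe.getBlockSizeBytes());
    }
}
```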
public void setPageSizeBytes(Integer pageSizeBytes)

The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
Parameters:
pageSizeBytes - The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.

public Integer getPageSizeBytes()
The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.

Returns:
The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
public ParquetSerDe withPageSizeBytes(Integer pageSizeBytes)
The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
Parameters:
pageSizeBytes - The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
Returns:
Returns a reference to this object so that method calls can be chained together.
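Another illustrative sketch; the trade-off noted in the comment reflects general Parquet behavior rather than anything specific to this class:

```java
import com.amazonaws.services.kinesisfirehose.model.ParquetSerDe;

public class PageSizeExample {
    public static void main(String[] args) {
        // A larger page trades finer-grained reads for better compression;
        // the value must be at least 64 KiB (the default is 1 MiB).
        ParquetSerDe serDe = new ParquetSerDe().withPageSizeBytes(2 * 1024 * 1024); // 2 MiB
        System.out.println(serDe.getPageSizeBytes());
    }
}
```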
public void setCompression(String compression)

The compression codec to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
Parameters:
compression - The compression codec to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
See Also:
ParquetCompression

public String getCompression()
The compression codec to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
Returns:
The compression codec to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
See Also:
ParquetCompression

public ParquetSerDe withCompression(String compression)
The compression codec to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
Parameters:
compression - The compression codec to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
Returns:
Returns a reference to this object so that method calls can be chained together.
See Also:
ParquetCompression

public ParquetSerDe withCompression(ParquetCompression compression)
The compression codec to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
Parameters:
compression - The compression codec to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
Returns:
Returns a reference to this object so that method calls can be chained together.
See Also:
ParquetCompression
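A brief sketch using the ParquetCompression enum overload documented above; preferring the enum over a raw string is a general Java practice, not a requirement of this API:

```java
import com.amazonaws.services.kinesisfirehose.model.ParquetCompression;
import com.amazonaws.services.kinesisfirehose.model.ParquetSerDe;

public class CompressionExample {
    public static void main(String[] args) {
        // The enum overload catches typos at compile time that a raw string would not.
        ParquetSerDe serDe = new ParquetSerDe().withCompression(ParquetCompression.GZIP);
        System.out.println(serDe.getCompression()); // "GZIP"
    }
}
```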
public void setEnableDictionaryCompression(Boolean enableDictionaryCompression)

Indicates whether to enable dictionary compression.
Parameters:
enableDictionaryCompression - Indicates whether to enable dictionary compression.

public Boolean getEnableDictionaryCompression()
Indicates whether to enable dictionary compression.

Returns:
Indicates whether to enable dictionary compression.
public ParquetSerDe withEnableDictionaryCompression(Boolean enableDictionaryCompression)
Indicates whether to enable dictionary compression.
Parameters:
enableDictionaryCompression - Indicates whether to enable dictionary compression.
Returns:
Returns a reference to this object so that method calls can be chained together.

public Boolean isEnableDictionaryCompression()
Indicates whether to enable dictionary compression.

Returns:
Indicates whether to enable dictionary compression.
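An illustrative sketch; the comment about repeated values describes dictionary encoding in Parquet generally, not a guarantee stated by this API:

```java
import com.amazonaws.services.kinesisfirehose.model.ParquetSerDe;

public class DictionaryCompressionExample {
    public static void main(String[] args) {
        // Dictionary encoding tends to help most when columns contain many repeated values.
        ParquetSerDe serDe = new ParquetSerDe().withEnableDictionaryCompression(true);
        System.out.println(serDe.isEnableDictionaryCompression()); // true
    }
}
```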
public void setMaxPaddingBytes(Integer maxPaddingBytes)
The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
Parameters:
maxPaddingBytes - The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.

public Integer getMaxPaddingBytes()
The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.

Returns:
The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
public ParquetSerDe withMaxPaddingBytes(Integer maxPaddingBytes)
The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
Parameters:
maxPaddingBytes - The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
Returns:
Returns a reference to this object so that method calls can be chained together.
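An illustrative sketch with an assumed 8 MiB padding budget; the value is arbitrary and chosen only to demonstrate the setter:

```java
import com.amazonaws.services.kinesisfirehose.model.ParquetSerDe;

public class MaxPaddingExample {
    public static void main(String[] args) {
        // Illustrative only: allow up to 8 MiB of padding so row groups can be
        // aligned to HDFS block boundaries after an S3-to-HDFS copy.
        ParquetSerDe serDe = new ParquetSerDe().withMaxPaddingBytes(8 * 1024 * 1024);
        System.out.println(serDe.getMaxPaddingBytes());
    }
}
```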
public void setWriterVersion(String writerVersion)

Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
Parameters:
writerVersion - Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
See Also:
ParquetWriterVersion

public String getWriterVersion()
Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
Returns:
Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
See Also:
ParquetWriterVersion

public ParquetSerDe withWriterVersion(String writerVersion)
Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
Parameters:
writerVersion - Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
Returns:
Returns a reference to this object so that method calls can be chained together.
See Also:
ParquetWriterVersion

public ParquetSerDe withWriterVersion(ParquetWriterVersion writerVersion)
Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
Parameters:
writerVersion - Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
Returns:
Returns a reference to this object so that method calls can be chained together.
See Also:
ParquetWriterVersion
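A sketch using the ParquetWriterVersion enum overload documented above; the caution about downstream readers is general Parquet advice, not part of this API's documentation:

```java
import com.amazonaws.services.kinesisfirehose.model.ParquetSerDe;
import com.amazonaws.services.kinesisfirehose.model.ParquetWriterVersion;

public class WriterVersionExample {
    public static void main(String[] args) {
        // V2 selects the newer Parquet data page format; check that downstream
        // readers support it before switching from the V1 default.
        ParquetSerDe serDe = new ParquetSerDe().withWriterVersion(ParquetWriterVersion.V2);
        System.out.println(serDe.getWriterVersion()); // "V2"
    }
}
```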
public String toString()

Returns a string representation of this object.

Overrides:
toString in class Object
See Also:
Object.toString()

public ParquetSerDe clone()
public void marshall(ProtocolMarshaller protocolMarshaller)
Specified by:
marshall in interface StructuredPojo
Parameters:
protocolMarshaller - Implementation of ProtocolMarshaller used to marshall this object's data.
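Finally, a small sketch of the value-object behavior implied by clone(), equals(Object), and hashCode(); the field-by-field equality shown is the usual behavior of SDK model classes, stated here as an assumption rather than a documented guarantee:

```java
import com.amazonaws.services.kinesisfirehose.model.ParquetSerDe;

public class CloneEqualsExample {
    public static void main(String[] args) {
        ParquetSerDe original = new ParquetSerDe().withCompression("GZIP");
        ParquetSerDe copy = original.clone();
        // clone() produces an independent object that compares equal field by field.
        System.out.println(original.equals(copy));                  // true
        System.out.println(original.hashCode() == copy.hashCode()); // true
    }
}
```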