Use the Parquet SerDe to create Athena tables from Parquet data.
The Parquet SerDe is used for data stored in the Parquet format.
Serialization library name
The serialization library name for the Parquet SerDe is org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe. For source code information, see Class ParquetHiveSerDe in the Apache documentation.
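The CREATE TABLE statement later in this topic uses STORED AS PARQUET, which selects this SerDe implicitly. If you prefer to reference the SerDe class explicitly, a minimal table definition can look like the following sketch; the table name, columns, and S3 location are placeholders for illustration and are not part of the example data set.
-- Minimal sketch that names the Parquet SerDe and its Hive input/output formats explicitly.
-- The table name, columns, and LOCATION below are illustrative placeholders.
CREATE EXTERNAL TABLE example_parquet_table (
  id INT,
  name STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS
  INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 's3://amzn-s3-demo-bucket/path/to/parquet/';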
Replace myregion in s3://athena-examples-myregion/path/to/data/ with the identifier of the Region where you run Athena, for example, s3://athena-examples-us-west-1/path/to/data/.
Use the following CREATE TABLE statement to create an Athena table from the underlying data stored in Parquet format in Amazon S3.
CREATE EXTERNAL TABLE flight_delays_pq (
yr INT,
quarter INT,
month INT,
dayofmonth INT,
dayofweek INT,
flightdate STRING,
uniquecarrier STRING,
airlineid INT,
carrier STRING,
tailnum STRING,
flightnum STRING,
originairportid INT,
originairportseqid INT,
origincitymarketid INT,
origin STRING,
origincityname STRING,
originstate STRING,
originstatefips STRING,
originstatename STRING,
originwac INT,
destairportid INT,
destairportseqid INT,
destcitymarketid INT,
dest STRING,
destcityname STRING,
deststate STRING,
deststatefips STRING,
deststatename STRING,
destwac INT,
crsdeptime STRING,
deptime STRING,
depdelay INT,
depdelayminutes INT,
depdel15 INT,
departuredelaygroups INT,
deptimeblk STRING,
taxiout INT,
wheelsoff STRING,
wheelson STRING,
taxiin INT,
crsarrtime INT,
arrtime STRING,
arrdelay INT,
arrdelayminutes INT,
arrdel15 INT,
arrivaldelaygroups INT,
arrtimeblk STRING,
cancelled INT,
cancellationcode STRING,
diverted INT,
crselapsedtime INT,
actualelapsedtime INT,
airtime INT,
flights INT,
distance INT,
distancegroup INT,
carrierdelay INT,
weatherdelay INT,
nasdelay INT,
securitydelay INT,
lateaircraftdelay INT,
firstdeptime STRING,
totaladdgtime INT,
longestaddgtime INT,
divairportlandings INT,
divreacheddest INT,
divactualelapsedtime INT,
divarrdelay INT,
divdistance INT,
div1airport STRING,
div1airportid INT,
div1airportseqid INT,
div1wheelson STRING,
div1totalgtime INT,
div1longestgtime INT,
div1wheelsoff STRING,
div1tailnum STRING,
div2airport STRING,
div2airportid INT,
div2airportseqid INT,
div2wheelson STRING,
div2totalgtime INT,
div2longestgtime INT,
div2wheelsoff STRING,
div2tailnum STRING,
div3airport STRING,
div3airportid INT,
div3airportseqid INT,
div3wheelson STRING,
div3totalgtime INT,
div3longestgtime INT,
div3wheelsoff STRING,
div3tailnum STRING,
div4airport STRING,
div4airportid INT,
div4airportseqid INT,
div4wheelson STRING,
div4totalgtime INT,
div4longestgtime INT,
div4wheelsoff STRING,
div4tailnum STRING,
div5airport STRING,
div5airportid INT,
div5airportseqid INT,
div5wheelson STRING,
div5totalgtime INT,
div5longestgtime INT,
div5wheelsoff STRING,
div5tailnum STRING
)
PARTITIONED BY (year STRING)
STORED AS PARQUET
LOCATION 's3://athena-examples-myregion/flight/parquet/'
tblproperties ("parquet.compression"="SNAPPY");
Run the MSCK REPAIR TABLE statement on the table to refresh partition metadata:
MSCK REPAIR TABLE flight_delays_pq;
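After the repair completes, you can optionally confirm that Athena registered the year partitions. The following check is a sketch that uses the flight_delays_pq table from this example.
-- Optional check: list the partitions that are now visible to Athena.
SHOW PARTITIONS flight_delays_pq;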
Query the top 10 routes delayed by more than one hour.
SELECT origin, dest, count(*) as delays
FROM flight_delays_pq
WHERE depdelayminutes > 60
GROUP BY origin, dest
ORDER BY 3 DESC
LIMIT 10;
Ignore Parquet statistics
When you read Parquet data, you might receive error messages like the following:
HIVE_CANNOT_OPEN_SPLIT: Index x out of bounds for length y
HIVE_CURSOR_ERROR: Failed to read x bytes
HIVE_CURSOR_ERROR: FailureException at Malformed input: offset=x
HIVE_CURSOR_ERROR: FailureException at java.io.IOException: can not read class org.apache.parquet.format.PageHeader: Socket is closed by peer.
To work around this issue, use the CREATE TABLE or ALTER TABLE SET TBLPROPERTIES statement to set the Parquet SerDe parquet.ignore.statistics property to true, as in the following examples.
CREATE TABLE example
...
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES (
'parquet.ignore.statistics'='true')
STORED AS PARQUET
...
ALTER TABLE example
ALTER TABLE ... SET TBLPROPERTIES ('parquet.ignore.statistics'='true')
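For example, applied to the flight_delays_pq table created earlier in this topic, the statement might look like the following sketch.
-- Sketch: apply the workaround to the flight_delays_pq table from this topic.
ALTER TABLE flight_delays_pq SET TBLPROPERTIES ('parquet.ignore.statistics'='true');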