# Release 348 (14 Dec 2020)

## General
* Add support for the `DISTINCT` clause in aggregations within correlated subqueries. (#5904)
* Support `SHOW STATS` for arbitrary queries. (#3109)
* Improve query performance by reducing worker-to-worker communication overhead. (#6126)
* Improve performance of `ORDER BY ... LIMIT` queries. (#6072)
* Reduce memory pressure and improve performance of queries involving joins. (#6176)
* Fix `EXPLAIN ANALYZE` for certain queries that contain a broadcast join. (#6115)
* Fix planning failures for queries that contain outer joins and aggregations using the `FILTER (WHERE <condition>)` syntax. (#6141)
* Fix incorrect results when a correlated subquery in a join contains aggregation functions such as `array_agg` or `checksum`. (#6145)
* Fix incorrect query results when using `timestamp with time zone` constants with precision higher than 3 that describe the same point in time in different zones. (#6318)
* Fix duplicate query completion events if a query fails early. (#6103)
* Fix query failure when views are accessed and the current session does not specify a default schema and catalog. (#6294)
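The `timestamp with time zone` fix above concerns constants that denote the same instant while being written in different zones. A minimal `java.time` sketch of that equivalence (illustrative only, not engine code; the class and method names are hypothetical):

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class SameInstant {
    // True when two zoned timestamps denote the same point in time,
    // regardless of the zone they are written in.
    static boolean sameInstant(ZonedDateTime a, ZonedDateTime b) {
        return a.toInstant().equals(b.toInstant());
    }

    public static void main(String[] args) {
        // Precision higher than 3: seven fractional digits, stored as nanoseconds.
        ZonedDateTime utc = ZonedDateTime.of(2020, 12, 14, 12, 0, 0, 123_456_700, ZoneId.of("UTC"));
        // Same instant, re-expressed in a different zone.
        ZonedDateTime warsaw = utc.withZoneSameInstant(ZoneId.of("Europe/Warsaw"));
        System.out.println(sameInstant(utc, warsaw)); // prints "true"
    }
}
```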
## Web UI

## JDBC driver
* Allow reading a `timestamp with time zone` value as a `ZonedDateTime` using the `ResultSet.getObject(int column, Class<?> type)` method. (#307)
* Accept `java.time.LocalDate` in `PreparedStatement.setObject(int, Object)`. (#6301)
* Extend `PreparedStatement.setObject(int, Object, int)` to allow setting `time` and `timestamp` values with precision higher than nanoseconds. (#6300) This can be done by providing a `String` value representing a valid SQL literal.
* Change the representation of a `row` value. `ResultSet.getObject` now returns an instance of the `io.prestosql.jdbc.Row` class, which better represents the returned value. Previously, a `row` value was represented as a `Map` instance, with unnamed fields being named `field0`, `field1`, etc. You can restore the previous behavior by invoking `getObject(column, Map.class)` on the `ResultSet` object. (#4588)
* Represent a `varbinary` value using its hex string representation in `ResultSet.getString`. Previously, the return value was useless, similar to `[B@2de82bf8`. (#6247)
* Report the precision of the `time(p)`, `time(p) with time zone`, `timestamp(p)` and `timestamp(p) with time zone` types in the `DECIMAL_DIGITS` column of the result set returned from `DatabaseMetaData#getColumns`. (#6307)
* Fix the value of the `DATA_TYPE` column for `time(p)` and `time(p) with time zone` in the result set returned from `DatabaseMetaData#getColumns`. (#6307)
* Fix failure when reading a `timestamp` or `timestamp with time zone` value with a seconds fraction greater than or equal to 999999999500 picoseconds. (#6147)
* Fix failure when reading a `time` value with a seconds fraction greater than or equal to 999999999500 picoseconds. (#6204)
* Fix element representation in arrays returned from `ResultSet.getArray`, making it consistent with `ResultSet.getObject`. Previously, the elements were represented using the internal client representation (e.g. `String`). (#6048)
* Fix `ResultSetMetaData.getColumnType` for `timestamp with time zone`. Previously, the type was miscategorized as `java.sql.Types.TIMESTAMP`. (#6251)
* Fix `ResultSetMetaData.getColumnType` for `time with time zone`. Previously, the type was miscategorized as `java.sql.Types.TIME`. (#6251)
* Fix failure when an instance of the `SphericalGeography` geospatial type is returned in the `ResultSet`. (#6240)
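The two "seconds fraction" fixes above hinge on a rounding boundary: once a picosecond fraction is reduced to nanosecond precision, any value at or above 999999999500 picoseconds rounds up to a full second and must carry over. An illustrative sketch of that carry (the names are hypothetical, not driver internals):

```java
import java.util.Arrays;

public class FractionRounding {
    // Round a sub-second fraction given in picoseconds (0 <= picos < 10^12)
    // to nanosecond precision, returning {carrySeconds, nanos}.
    static long[] roundPicosToNanos(long picos) {
        long nanos = (picos + 500) / 1000; // round half up to nanoseconds
        if (nanos == 1_000_000_000L) {
            // The fraction rounds up to a whole second: carry into the
            // seconds field instead of producing an out-of-range nano value.
            return new long[] {1, 0};
        }
        return new long[] {0, nanos};
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(roundPicosToNanos(999_999_999_499L))); // prints "[0, 999999999]"
        System.out.println(Arrays.toString(roundPicosToNanos(999_999_999_500L))); // prints "[1, 0]"
    }
}
```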
## CLI

## Hive connector
* Allow configuring the S3 endpoint in security mapping. (#3869)
* Add support for S3 streaming uploads. Data is uploaded to S3 as it is written, rather than staged in a local temporary file. This feature is disabled by default, and can be enabled using the `hive.s3.streaming.enabled` configuration property. (#3712, #6201)
* Reduce load on the metastore when background cache refresh is enabled. (#6101, #6156)
* Verify that data is in the correct bucket file when reading bucketed tables. This is enabled by default, as incorrect bucketing can cause incorrect query results, but can be disabled using the `hive.validate-bucketing` configuration property or the `validate_bucketing` session property. (#6012)
* Allow fallback to the legacy Hive view translation logic via the `hive.legacy-hive-view-translation` configuration property or the `legacy_hive_view_translation` session property. (#6195)
* Add the deserializer class name to the split information exposed to the event listener. (#6006)
* Improve performance when querying tables that contain symlinks. (#6158, #6213)
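The S3 streaming and bucket-validation toggles above are catalog configuration properties. A sketch of a Hive catalog properties fragment using them (the file path is the conventional location, assumed here):

```properties
# etc/catalog/hive.properties (fragment)

# Enable S3 streaming uploads; disabled by default.
hive.s3.streaming.enabled=true

# Bucket validation is enabled by default. Disabling it can lead to
# incorrect query results on incorrectly bucketed data.
hive.validate-bucketing=false
```

Per the notes above, bucket validation can alternatively be toggled per session via the `validate_bucketing` session property.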
## Iceberg connector

## Kafka connector
* Allow writing `timestamp with time zone` values into columns using the `milliseconds-since-epoch` or `seconds-since-epoch` JSON encoders. (#6074)
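As a sketch only, a topic description for such a column might look like the following. The field names follow the Kafka connector's table description files, but treat the exact layout and the example table/column names as assumptions:

```json
{
  "tableName": "events",
  "topicName": "events",
  "message": {
    "dataFormat": "json",
    "fields": [
      {
        "name": "created_at",
        "type": "TIMESTAMP WITH TIME ZONE",
        "mapping": "created_at",
        "dataFormat": "milliseconds-since-epoch"
      }
    ]
  }
}
```

With this description, a written `timestamp with time zone` value is encoded in the JSON message as the number of milliseconds since the epoch.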