Driver version
12.2.0
SQL Server version
Microsoft SQL Server 2017 (RTM-CU31-GDR) (KB5021126) - 14.0.3460.9 (X64) Jan 25 2023 08:42:43 Copyright (C) 2017 Microsoft Corporation Developer Edition (64-bit) on Windows Server 2012 R2 Datacenter 6.3 (Build 9600: ) (Hypervisor)
Client Operating System
Windows 10
JAVA/JVM version
11.0.13 (Eclipse Adoptium 11.0.13+8)
Table schema
Table with multiple decimal types, for example: decimal(18, 2)
Problem description
The decimal type definition of each batch's data is relevant for plan creation in the database: each different decimal type definition causes the generation of a new plan.
In 7.4.1 and higher, up to 12.2 (exclusive), the scale was taken from the BigDecimal's scale and the precision was always 38.
The scale of a BigDecimal can be changed (setScale(int)) to a new one without really changing the value. So it is possible to determine the maximal scale to use, and this constellation is solvable by the caller. I think the problems from "Fix for bigDecimal values between 0 and 1 having too high of a precision" #2116 / "BigDecimal Precision/Scale Fix" #1912 could probably also be solved that way by the caller (the user of the JDBC driver).
But with mssql-jdbc driver 12.2.0 the precision can also change from value to value, so different plans are created, and the precision of a BigDecimal cannot be influenced.
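The core constraint can be demonstrated with plain JDK classes. This is a minimal sketch, not driver code; typeDefinitionFor is a hypothetical helper that mimics deriving a per-value decimal(p, s) definition from the value itself:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalTypeDef {
    // Hypothetical helper: derive a decimal(p, s) type definition from the
    // value itself, the way a per-value definition would look.
    static String typeDefinitionFor(BigDecimal v) {
        return "decimal(" + v.precision() + "," + v.scale() + ")";
    }

    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("7.5");
        BigDecimal b = new BigDecimal("123.45");

        // Different value shapes yield different type definitions,
        // and thus one plan per variant:
        System.out.println(typeDefinitionFor(a)); // decimal(2,1)
        System.out.println(typeDefinitionFor(b)); // decimal(5,2)

        // The caller can pin the scale without changing the value ...
        BigDecimal a2 = a.setScale(2, RoundingMode.UNNECESSARY);
        System.out.println(typeDefinitionFor(a2)); // decimal(3,2)

        // ... but precision() is always derived from the digits of the
        // value; there is no BigDecimal API to force it to, say, 18.
    }
}
```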
Next solution possibility for the caller:
SQLServerPreparedStatement has its own method (not from java.sql.PreparedStatement):
But even this does not change the precision in the type definition.
Expected behavior
A possibility (or automatic behavior) to influence the actual type definition with the maximal precision and scale for the parameter.
Actual behavior
No possibility.
"useBulkCopyForBatchInsert" is also not usable, because the tables mostly contain a DATE column.
Any other details that can be helpful
Background information:
For a migration we insert (using PreparedStatement) many rows into different tables. For some tables we see a throughput decrease.
In my test migration (JDBC driver 7.4.1) with ~31 million inserts (same prepared SQL with 61 columns) and one executeBatch call (in its own transaction) per 5000 rows, I got ~8600 different plans over time and also many recompiles. I think with so many plan variants it was not possible for them to stay in memory for reuse.
For 31.3 million inserts and 8600 plans there were 13.9 million compiles of the same prepared statement. Using the average compile time from the Query Store data, I calculate 8.88 hours just for compiling the same prepared SQL.
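As a back-of-the-envelope check derived only from the numbers above (the 8.88 hours and 13.9 million compiles are the reported figures, not measured here):

```java
public class CompileCost {
    // Implied average compile time in milliseconds, from total compile
    // hours and the number of compiles reported by Query Store.
    static double avgCompileMs(double totalHours, long compiles) {
        return totalHours * 3_600_000 / compiles;
    }

    public static void main(String[] args) {
        long inserts = 31_300_000L;   // rows inserted
        long compiles = 13_900_000L;  // compiles reported for the same statement
        double hours = 8.88;          // total compile time from Query Store

        // roughly 2.3 ms per compile, with a compile for ~44% of the inserts
        System.out.printf("avg compile ~%.1f ms; compiles for ~%.0f%% of inserts%n",
                avgCompileMs(hours, compiles), 100.0 * compiles / inserts);
    }
}
```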
Change for com.microsoft.sqlserver.jdbc.Parameter.java
With this change it was possible to influence the actual precision with the method:
But if you have an idea that does not require a driver-specific interface, you are welcome.
Perhaps a property for the old precision-38 behavior would also work ...
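The caller-side mitigation for the scale (though not the precision) described above can be sketched as follows. This is a minimal, hypothetical sketch, not driver code: normalizeForColumn is an illustrative helper the caller would apply before binding each value, so that every value of, say, a decimal(18, 2) column carries the same scale.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalNormalizer {
    // Hypothetical caller-side helper: force every value bound to a
    // decimal(p, s) column to the column's declared scale, so the driver
    // derives one stable scale per column instead of one per value shape.
    static BigDecimal normalizeForColumn(BigDecimal value, int columnScale) {
        return value.setScale(columnScale, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        // e.g. applied before ps.setBigDecimal(i, value) in the batch loop:
        System.out.println(normalizeForColumn(new BigDecimal("7.5"), 2));     // 7.50
        System.out.println(normalizeForColumn(new BigDecimal("123.456"), 2)); // 123.46
    }
}
```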
diff --git a/src/main/java/com/microsoft/sqlserver/jdbc/Parameter.java b/src/main/java/com/microsoft/sqlserver/jdbc/Parameter.java
index 1266fdac..fa835492 100644
--- a/src/main/java/com/microsoft/sqlserver/jdbc/Parameter.java
+++ b/src/main/java/com/microsoft/sqlserver/jdbc/Parameter.java
@@ -254,14 +254,22 @@ final class Parameter {
             Calendar calendar, Integer precision, Integer scale, SQLServerConnection con, boolean forceEncrypt,
             SQLServerStatementColumnEncryptionSetting stmtColumnEncriptionSetting, int parameterIndex, String userSQL,
             String tvpName) throws SQLServerException {
+
+        if (javaType == JavaType.BIGDECIMAL
+                || shouldHonorAEForParameter) {
+
+            // local scale would later set in DTV and use from there in setTypeDefinition(DTV) in GetTypeDefinitionOP
+            if (null != precision) {
+                this.userProvidesPrecision = true;
+                this.valueLength = precision.intValue();
+            } else {
+                this.userProvidesPrecision = false;
+            }
+        }
         if (shouldHonorAEForParameter) {
-            userProvidesPrecision = false;
             userProvidesScale = false;
-            if (null != precision) {
-                userProvidesPrecision = true;
-            }
             if (null != scale) {
                 userProvidesScale = true;
@@ -530,22 +538,30 @@ final class Parameter {
                 }
             } else {
                 BigDecimal bigDecimal = null;
+                int typDefinitionLength = userProvidesPrecision ? valueLength : 0;
+                int typDefinitionScale = Math.max(scale, 0);
+
                 if (dtv.getJavaType() == JavaType.BIGDECIMAL
                         && null != (bigDecimal = (BigDecimal) dtv.getSetterValue())) {
                     String[] plainValueArray = bigDecimal.abs().toPlainString().split("\\.");
-                    param.typeDefinition = SSType.DECIMAL.toString() + "(" +
+
                     // Precision
-                            (plainValueArray.length == 2 ? plainValueArray[0].length()
-                                    + plainValueArray[1].length() : plainValueArray[0].length())
-                            + "," +
-                            // Scale
-                            (plainValueArray.length == 2 ? plainValueArray[1].length() : 0) + ")";
+                    if (typDefinitionLength == 0)
+                        typDefinitionLength = plainValueArray.length == 2 ? plainValueArray[0].length()
+                                + plainValueArray[1].length()
+                                : plainValueArray[0].length();
+
+                    // Scale
+                    if (typDefinitionScale == 0)
+                        typDefinitionScale = plainValueArray.length == 2 ? plainValueArray[1].length() : 0;
                 } else {
-                    param.typeDefinition = SSType.DECIMAL.toString() + "("
-                            + SQLServerConnection.maxDecimalPrecision + "," + scale + ")";
+                    typDefinitionLength = SQLServerConnection.maxDecimalPrecision;
                 }
+                param.typeDefinition = SSType.DECIMAL.toString() + "("
+                        + typDefinitionLength + "," + typDefinitionScale + ")";
+
             }
             break;