v1.0.0
Release date: 2024-03-02 00:02:19
Latest release of DataLinkDC/dinky: v1.1.0 (2024-07-30 22:03:38)
Dinky 1.0.0 Release Notes
Upgrade Instructions
- Dinky 1.0 is a refactored version that restructures existing features, adds several enterprise-level features, and removes some limitations of 0.7. There is currently no direct upgrade path from 0.7 to 1.0; redeploying version 1.0 is recommended.
- Starting from Dinky 1.0, the Dinky community will no longer maintain any version before 1.0.
- Starting from Dinky 1.0, the Dinky community will support Flink 1.14.x and above and will no longer maintain Flink versions below 1.14. Dinky will gradually support new Flink features as they appear.
- From Dinky 1.0 onward, each new major Flink version will be matched by a new major Dinky version, and a Dinky-Client version may be dropped at the same time, depending on the situation; which version is dropped may be decided by a community vote.
- Four RC versions were released during the refactoring. RC versions can be upgraded in place, but redeploying the 1.0-RELEASE version is still recommended, to avoid leftover issues.
- Users of Dinky 0.7 can continue to use 0.7, but no maintenance or support will be provided; installing version 1.0 as soon as possible is recommended.
The changes from 0.7 to 1.0 are extensive and include some incompatible changes. Users of 0.7 cannot upgrade directly to 1.0; redeploying version 1.0 is recommended.
Incompatible changes
- CDCSOURCE dynamic variable syntax changed from `${}` to `#{}`
- Built-in global variables such as `_CURRENT_DATE_` were removed and replaced by expression variables
- Flink Jar task definition changed from a form to the EXECUTE JAR syntax
- The definition of dinky-app-xxxx.jar in Application mode moved to the cluster configuration
- The database DDL is not upgrade-compatible
- The type attribute of Dinky's built-in Catalog changed from `dlink_catalog` to `dinky_catalog`
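For example, where a 0.7 CDCSOURCE statement interpolated dynamic variables as `${tableName}`, 1.0 uses `#{tableName}`. A minimal illustrative sketch (the connector options, values, and sink keys below are placeholder assumptions, not a complete or authoritative statement):

```sql
-- Sketch only: illustrates the #{} variable style; all option values are placeholders.
EXECUTE CDCSOURCE demo_sync WITH (
  'connector' = 'mysql-cdc',
  'hostname' = '127.0.0.1',
  'port' = '3306',
  'username' = 'root',
  'password' = '******',
  'database-name' = 'demo',
  'sink.connector' = 'print',
  -- 0.7 wrote 'ods_${tableName}'; 1.0 uses the #{} form:
  'sink.table.name' = 'ods_#{tableName}'
);
```

Existing 0.7 CDCSOURCE statements must be rewritten to the `#{}` form before they will run on 1.0.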
Refactoring
- Refactored the data development module
- Refactored the operation and maintenance center
- Refactored the registration center
- Refactored the Flink task submission process
- Refactored the Flink Jar task submission method
- Refactored the CDCSOURCE whole-database synchronization architecture
- Refactored Flink task monitoring and alerting
- Refactored permission management
- Moved system configuration to online configuration
- Refactored the DolphinScheduler push integration
- Refactored the packaging method
New Features
- Data development supports code snippet hints
- Support real-time printing of Flink table data
- The console prints task submission logs in real time
- Support Flink CDC 3.0 whole-database synchronization
- Support custom alert rules and custom alert templates
- Support submission via the Flink Kubernetes Operator
- Support proxied access to the Flink Web UI
- Added custom charts for monitoring Flink task metrics
- Support monitoring of the Dinky JVM
- Added resource center functions (local, HDFS, OSS) and an extended rs protocol
- Added Git UDF/JAR project hosting and an end-to-end build process
- Support Flink Jar task submission in all execution modes
- Added the ADD CUSTOMJAR syntax to dynamically load dependencies
- Added the ADD FILE syntax to dynamically load files
- OpenAPI supports submission with custom parameters
- Upgraded the permission system to support tenants, roles, tokens, and menu permissions
- Support LDAP
- Added new widget functions to the data development page
- Support pushing dependent tasks to DolphinScheduler
- Implemented the Flink instance stop function
- CDCSOURCE whole-database synchronization keeps data ordered under multiple degrees of parallelism
- Implemented a configurable alert re-send prevention function
- Ordinary SQL can now be scheduled and executed by DolphinScheduler
- Added the ability to list the dependency JARs loaded by the system, grouped to facilitate troubleshooting JAR-related issues
- Implemented the cluster configuration connection test function
- Support H2, MySQL, and PostgreSQL as the metadata database (H2 by default)
New Syntax
- CREATE TEMPORAL FUNCTION defines temporal table functions
- ADD FILE dynamically loads class/configuration and other files
- ADD CUSTOMJAR dynamically loads JAR dependencies
- PRINT TABLE previews table data in real time
- EXECUTE JAR defines Flink Jar tasks
- EXECUTE PIPELINE defines Flink CDC 3.x whole-database synchronization tasks
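As a rough sketch of how the new statements read (the option keys 'uri', 'main-class', and 'args', and all paths and names, are illustrative assumptions rather than the authoritative syntax; consult the Dinky 1.0 syntax documentation for the exact form):

```sql
-- Illustrative sketch only; option keys and paths are assumptions.
ADD CUSTOMJAR 'rs:/jar/my-udf.jar';        -- dynamically load a JAR dependency
ADD FILE '/opt/dinky/conf/hive-site.xml';  -- dynamically load a configuration file

-- Preview table data in real time in the console
PRINT TABLE my_table;

-- Define a Flink Jar task (replaces the old form-based definition)
EXECUTE JAR WITH (
  'uri' = 'rs:/jar/flink-demo.jar',
  'main-class' = 'com.example.DemoJob',
  'args' = '--input /data/in'
);
```

Note the rs:/ paths assume the new resource center's rs protocol; local or HDFS paths should work as well where configured.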
Fixes
- Fixed missing extension paths in CLASS_PATH in auto.sh
- Fixed the job list lifecycle status not re-rendering after publish/offline
- Fixed the Flink 1.18 SET syntax not working and producing a null error
- Fixed the savepoint mechanism of the submission history
- Fixed creating views in the Dinky Catalog
- Fixed exceptions not being thrown in Flink Application mode
- Fixed incorrect rendering of alert options
- Fixed job lifecycle issues
- Fixed Kubernetes YAML not displaying in cluster configuration
- Fixed an elapsed-time formatting error in the operation and maintenance center job list
- Fixed the Flink DAG tooltip issue
- Fixed checkpoint path not found
- Fixed node positioning errors when pushing jobs to DolphinScheduler
- Fixed job parameters not taking effect when the SET configuration contained single quotes
- Upgraded jmx_prometheus_javaagent to 0.20.0 to resolve some CVEs
- Fixed checkpoint display issues
- Fixed job instances stuck in the running state
- Fixed missing log output after a Yarn Application task failed to submit
- Fixed job configuration not rendering Yarn per-job clusters
- Fixed a misspelled URL causing request failures
- Fixed identical token values being inserted when multiple users log in
- Fixed alert instance form rendering issues
- Fixed FlinkSQLEnv failing validation
- Fixed SET statements not taking effect
- Fixed custom Flink and Hadoop configuration not taking effect in Yarn cluster configuration
- Fixed checkpoint information not being retrievable in the operation and maintenance center
- Fixed job status not being detected after a Yarn Application job completed
- Fixed missing console log output when Yarn job submission failed
- Fixed Flink instances started from a cluster configuration not being selectable in job configuration
- Fixed misrecognition of jobs in RECONNECT status
- Fixed an issue with FlinkJar tasks being submitted in per-job mode
- Fixed PID detection during Dinky startup
- Fixed conflicts when the built-in Paimon version differed from the user-integrated version (resolved via shading)
- Fixed the checkpoint parameter of FlinkJar tasks not taking effect in Application mode
- Fixed the name and remark being updated incorrectly when modifying a Task job
- Fixed the password being required when registering a data source
- Fixed incorrect heartbeat detection of cluster instances
- Fixed Jar task submission not supporting the SET syntax
- Fixed the data development job list not collapsing in some cases
- Fixed alert messages being sent repeatedly under multi-threading
- Fixed the tab height when opening a job in data development
- Fixed the JobManager log in operation and maintenance center job details not displaying correctly in some cases
- Fixed Catalog NPE issues
- Fixed incorrect per-job task status
- Fixed ADD CUSTOMJAR syntax issues
- Fixed Jar tasks not being monitorable
- Fixed an invalid-token exception
- Fixed a series of issues caused by statement delimiters and removed the related system configuration item
- Fixed task status rendering in the operation and maintenance center
- Fixed task deletion failing when the job instance does not exist
- Fixed duplicate exception alerts
- Fixed some PyFlink submission issues
- Fixed global variables not working in Application mode
- Fixed K8s tasks failing to start due to uninitialized resource types
- Fixed a pipeline retrieval error in Jar tasks that broke the front end
- Fixed SqlServer timestamp-to-string conversion
- Fixed an NPE when publishing tasks with UDFs
- Fixed Jar tasks being unable to retrieve execution history
- Fixed a front-end crash caused by an NPE when the Doris data source retrieved DDL and ran queries
Optimizations
- Adjusted the key width of job configuration items
- Optimized the query job directory tree
- Optimized Flink on Yarn application submission
- Optimized the Explainer class to build results with the builder pattern
- Optimized document management
- Implemented operators via SPI
- Optimized the document form pop-up
- Optimized type rendering of Flink instances
- Optimized the data source detail search box
- The version number is now returned by a backend interface
- Optimized CANCEL job logic; lost-connection jobs can now be force-stopped
- Optimized reference-detection logic when entries in the registration center are deleted
- A job template can now be specified when creating a job
- Optimized Task deletion logic
- Optimized some front-end internationalization
- Optimized automatic switching between the console and result tabs during execution preview
- Optimized the UDF download logic for K8s
- Optimized whole-database and sharded database/table synchronization
- Optimized the jump from the registration center data source list to the detail page
- Optimized job configuration logic (configuration cannot be edited once the job is published)
- Optimized cluster instance rendering in the data development job configuration
- Optimized Flink cluster heartbeat detection
- Data source exceptions are now reported back to the front end instead of failing silently
- Changed the program shutdown strategy to graceful shutdown
- CDCSOURCE supports the earliest-offset and timestamp scan startup modes
- Removed the uniqueness restriction on the task savepoint path
- Optimized CDCSOURCE light_schema_change from MySQL to Doris
- Optimized the startup script classpath and added FLINK_HOME
- Changed some front-end absolute paths to relative paths
- Changed the default admin account password to a strong password
Documentation
- Improved the registration center cluster instance list documentation
- Improved the registration center alert documentation
- Improved the registration center Git project documentation
- Updated the domain name
- Improved the registration center and authentication center documentation
- Improved the contributor development documentation
- Added descriptions of the debezium.* parameters to the CDCSOURCE documentation
- Restructured the official website documentation
- Added some data development documents
- Removed some deprecated/incorrect documentation
- Added a quick start document
- Added deployment documentation
- Optimized the sharded database/table documentation
- Optimized the general deployment documentation
- Added documentation on alert re-sending
- Optimized the OpenAPI documentation
- Added an HDFS HA configuration document
- Optimized the LDAP documentation
- Fixed some typos in the documentation
- Fixed a version error in the DolphinScheduler integration document
Security
- CVE-2023-2976
- CVE-2020-8908
Other
- Added some automated actions
1. dinky-release-1.14-1.0.0.tar.gz (175.53 MB)
2. dinky-release-1.15-1.0.0.tar.gz (174.83 MB)
3. dinky-release-1.16-1.0.0.tar.gz (174.77 MB)
4. dinky-release-1.17-1.0.0.tar.gz (174.77 MB)
5. dinky-release-1.18-1.0.0.tar.gz (174.79 MB)