
Chore/update lock file #30

Open

warren-davies4 wants to merge 5 commits into NHSDigital:main from nhsengland:chore/update-lock-file

Conversation

@warren-davies4 (Collaborator)

No description provided.

dependabot[bot] and others added 5 commits on February 20, 2026 at 14:01
Bumps [pyspark](https://github.com/apache/spark) from 3.2.1 to 3.3.2.
- [Commits](apache/spark@v3.2.1...v3.3.2)

---
updated-dependencies:
- dependency-name: pyspark
  dependency-version: 3.3.2
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Bumps [numpy](https://github.com/numpy/numpy) from 1.21.5 to 1.22.0.
- [Release notes](https://github.com/numpy/numpy/releases)
- [Changelog](https://github.com/numpy/numpy/blob/main/doc/RELEASE_WALKTHROUGH.rst)
- [Commits](numpy/numpy@v1.21.5...v1.22.0)

---
updated-dependencies:
- dependency-name: numpy
  dependency-version: 1.22.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* added move_attributes_to_new_dimension()

* added get_dimension_list_from_col()

* added create_dimensions_cohort_table(), changed config to yaml

* added test_dimension_cohorts.py

* moved get_dimension_list_from_col to processing/dimension_cohorts

* added rename_cols() and processing func registry/call loop
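The "processing func registry/call loop" commit suggests a config-driven pipeline: each step in the YAML config names a registered processing function, and a call loop applies them in order. A minimal pure-Python sketch of that pattern, with toy dict-based frames standing in for PySpark DataFrames (the function and config key names here are illustrative, not taken from the repo):

```python
# Registry mapping step names (as they would appear in the YAML
# config) to processing functions. All names are illustrative.
PROCESSING_FUNCS = {}

def register(name):
    """Decorator that adds a function to the registry under `name`."""
    def wrap(func):
        PROCESSING_FUNCS[name] = func
        return func
    return wrap

@register("rename_cols")
def rename_cols(df, mapping):
    # Stand-in for the PySpark version (withColumnRenamed per pair).
    return {mapping.get(k, k): v for k, v in df.items()}

@register("replace_col_values")
def replace_col_values(df, col, replacements):
    # Replace matching values in one column; others pass through.
    df = dict(df)
    df[col] = [replacements.get(v, v) for v in df[col]]
    return df

def run_steps(df, steps):
    """Call loop: look up each configured step and apply it in order."""
    for step in steps:
        func = PROCESSING_FUNCS[step["function"]]
        df = func(df, **step["args"])
    return df
```

The registry keeps the config declarative: adding a new transformation only requires registering a function, not touching the loop.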

* added conftest.py

* added replace_col_values()

* updated config for move_attributes_to_new_dimension

* added extra test case for test_replace_col_values()

* added concat_cols()

* added create_md5_hash_col

* added create_md5_hash_col

* added create_uuid_col
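The `create_md5_hash_col` and `create_uuid_col` commits suggest surrogate-key columns: a deterministic key hashed from existing columns, versus a random per-row UUID. In PySpark the hash is typically `F.md5(F.concat_ws(sep, *cols))`; the pure-Python sketch below illustrates the same idea (the separator and function name are assumptions, not the repo's API):

```python
import hashlib

def md5_hash_key(*values, sep="|"):
    """Deterministic surrogate key: md5 of the separator-joined values.
    Mirrors the PySpark pattern F.md5(F.concat_ws(sep, *cols));
    None is treated as an empty string, as concat_ws skips nulls."""
    joined = sep.join("" if v is None else str(v) for v in values)
    return hashlib.md5(joined.encode("utf-8")).hexdigest()
```

Unlike a UUID column, the same input row always produces the same 32-character hex digest, which makes the key stable across pipeline runs.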

* added cast_date_col_to_timestamp

* fix cast_date_col_to_timestamp by adding format param
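The fix above adds a format parameter, which matters because an ambiguous date string such as "03/04/2026" cannot be parsed safely without knowing its pattern. In PySpark this would be `F.to_timestamp(F.col(name), fmt)` with a Spark pattern like "dd/MM/yyyy"; a plain-Python sketch of the idea (the default format here is an assumption):

```python
from datetime import datetime

def cast_date_to_timestamp(value, fmt="%d/%m/%Y"):
    """Parse a date string into a timestamp using an explicit format.
    The format parameter is the fix: without it, day/month order in
    strings like 03/04/2026 is guessed rather than specified."""
    return datetime.strptime(value, fmt)
```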

* added DataFrame.drop() function wrapper

* added dimensions and metric schemas

* fixed typo, metrics to metric

* added validation and schemas

* added add_lit_col

* switched get_config() to yaml

* removed some unneeded comments

* refactor: add save_df_as_named_csv wrapper and fix void column CSV write error

- Added save_df_as_named_csv to write_csv.py to wrap save and rename steps
- Cast void-typed columns to StringType before writing to avoid PySpark AnalysisException
- Replaced duplicate csv save/rename blocks in create_publication.py with new wrapper
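Some background on the void-column fix: Spark infers NullType (shown as "void") for a column containing only nulls, and the CSV writer rejects such columns with an AnalysisException. The sketch below shows the cast-plan logic on a plain (name, dtype) schema list; the real PySpark version would scan `df.schema` for NullType fields and apply `F.col(name).cast(StringType())`. The function name is illustrative:

```python
def void_cast_plan(schema):
    """Given (column, dtype) pairs, return the columns that must be
    cast to string before a CSV write: any column whose inferred
    type is 'void' (Spark's NullType, produced by all-null columns)."""
    return {name: "string" for name, dtype in schema if dtype == "void"}
```

Casting only the affected columns leaves every other column's type, and hence its CSV rendering, unchanged.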

* feat: add dimensions_to_exclude support to create_dimension_table

* refactor: make dimension and attribute col names configurable in create_dimension_table

* chore: bump pyspark to 3.2.2

* chore: remove unused dependencies, add pyyaml

* removed unused files

* renamed example files

* replace example pipeline with maternity services pipeline using cml packages

* update project name, dependencies, and add test-pypi sources

* remove redundant requirements.txt in favour of poetry.lock

* added cml_schemas

* renamed package to msds-monthly-to-cml

* rename package from msds-monthly-to-cml to msds_monthly_to_cml

* add logging, update config keys, and ignore log files

* address code review feedback

* update devcontainer to use poetry and python 3.11

Co-authored-by: Claude <claude@anthropic.com>

* add project README

Co-authored-by: Claude <claude@anthropic.com>

---------

Co-authored-by: Claude <claude@anthropic.com>
Co-authored-by: Claude <noreply@anthropic.com>