
fix: handle None content.parts in google_genai #5872

Closed
thakoreh wants to merge 60 commits into getsentry:main from thakoreh:fix/google-genai-none-parts

Conversation

@thakoreh

Fixes #5854

Gemini sometimes returns candidates where `content.parts` exists but
is `None`. A `hasattr` check lets this through, and we crash on iteration.

Switched to a truthiness check and added a regression test.
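The pattern can be sketched as follows. This is illustrative, not the actual integration code; the function names and `SimpleNamespace` stand-ins are invented for the example.

```python
from types import SimpleNamespace

def extract_texts_buggy(candidate):
    # hasattr() is True even when parts is explicitly None,
    # so the iteration below raises TypeError.
    if hasattr(candidate.content, "parts"):
        return [part.text for part in candidate.content.parts]
    return []

def extract_texts_fixed(candidate):
    # Truthiness check: rejects a missing attribute, None, and [].
    if getattr(candidate.content, "parts", None):
        return [part.text for part in candidate.content.parts]
    return []
```

With `parts=None`, the buggy version raises `TypeError` on iteration while the fixed version falls through to the empty-list default.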

dingsdax and others added 30 commits March 12, 2026 16:45
…#5649)

## Summary

- Add two GitHub Agentic Workflows (`gh aw`) that auto-generate and
maintain developer-facing codebase documentation in `docs/codebase/`
- **`docs-codebase-refresh`**: full regeneration of every doc page,
triggered on merge to main/master or manual dispatch
- **`docs-codebase-update`**: incremental update of only affected pages,
triggered on merge when `sentry_sdk/**`, `MIGRATION_GUIDE.md`, or
`CHANGELOG.md` change
- Both workflows create PRs (never direct commits) for human review
- Includes a portable `BUILD_PLAN.md` designed for reuse across other
Sentry SDKs -- only the SDK Context block changes per language (will
remove later)

### How This Relates to Other Sentry Docs

- **[docs.sentry.io/platforms/\*](https://docs.sentry.io/)** tells users
*what to do* -- setup guides, config options, API usage.
- **[develop.sentry.dev/sdk/](https://develop.sentry.dev/sdk/)** tells
SDK authors *what to build* -- protocol spec, envelope format, required
behaviors.
- **API reference** (Sphinx, TypeDoc, Javadoc, etc.) tells developers
*what the API surface looks like* -- auto-generated from
docstrings/annotations. Lists signatures, parameters, return types.
- **`docs/codebase/*`** (this) explains *what was built and how it
works* -- architecture, data flow, how modules connect, and why.
Generated from full source analysis, not just docstrings. Aimed at SDK
contributors and maintainers.

### Files added

| File | Purpose |
|------|---------|
| `docs/codebase/BUILD_PLAN.md` | Portable blueprint with porting
checklist |
| `docs/codebase/_meta/style-guide.md` | SDK-agnostic formatting rules
and page templates |
| `docs/codebase/_meta/manifest.json` | Empty manifest (populated by
first workflow run) |
| `.github/workflows/docs-codebase-refresh.md` | Full refresh workflow
source |
| `.github/workflows/docs-codebase-update.md` | Incremental update
workflow source |
| `.github/workflows/docs-codebase-*.lock.yml` | Compiled Actions
workflows |
| `.gitattributes` | Marks `docs/codebase/**` as `linguist-generated` |

## Test plan

- [x] `gh aw compile` produces 0 errors, 0 warnings for both workflows
- [ ] Manual trigger via `gh aw run docs-codebase-refresh` generates
initial docs
- [ ] Verify generated pages cover all integrations in `_MIN_VERSIONS`
- [ ] Push a source change to main, verify incremental workflow updates
only affected pages
- [ ] Both workflows create PRs (not direct commits)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…sentry#5654)

## Summary

- Switch `docs-codebase-refresh` and `docs-codebase-update` GitHub
Agentic Workflows from the default Copilot engine to `engine: claude`
- Lock files recompiled via `gh aw compile`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…5655)

Remove the agentic CI workflows and generated artifacts introduced for
auto-generating codebase documentation.

This reverts the work from getsentry#5649 and getsentry#5654, which added GitHub Actions
workflows to generate and refresh docs using an AI agent. The generated
docs files (`BUILD_PLAN.md`, `_meta/manifest.json`,
`_meta/style-guide.md`) and associated action lock files are also
removed.

The `.gitattributes` file (which was added as part of that workflow
setup) is removed as well.

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Add the `gen_ai.system` span attribute (set to `"anthropic"`) to the
Anthropic integration.

Other AI integrations (OpenAI, Langchain, Google GenAI, LiteLLM,
Pydantic AI) already set this attribute, but it was missing from the
Anthropic integration. The attribute is set in `_set_input_data()` which
is called for every span (streaming/non-streaming, sync/async).

Refs PY-2135
Closes getsentry#5657
Set the `gen_ai.response.id` property on spans created by the Anthropic
integration.

For non-streaming responses, the ID is read from `result.id` on the
Message object. For streaming responses, it's captured from
`event.message.id` in the `message_start` event and threaded through the
iterator to be set when the stream completes.

The `_collect_ai_data` function's return tuple is extended with the new
`response_id` field, and `_set_output_data` accepts an optional
`response_id` parameter to set on the span.

Refs PY-2137
Closes getsentry#5659
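The streaming case described above can be sketched like this. `FakeSpan` and the event shapes are stand-ins for illustration, not the real Anthropic integration code.

```python
from types import SimpleNamespace

class FakeSpan:
    # Minimal stand-in for a span object.
    def __init__(self):
        self.data = {}
    def set_data(self, key, value):
        self.data[key] = value

def wrap_stream(events, span):
    # Capture the id from the message_start event and thread it
    # through the iterator; set it only once the stream completes.
    response_id = None
    for event in events:
        if event.type == "message_start":
            response_id = event.message.id
        yield event
    if response_id is not None:
        span.set_data("gen_ai.response.id", response_id)
```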
Replace mocks with `httpx` types to avoid test failures when library internals change.
Use double quotes for JSON strings.
Create dedicated functions for patching synchronous and asynchronous response iterators.
…med response (getsentry#5564)

Prepare for adding patches for `.stream()`, which iterate over `MessageStreamEvent`.

`MessageStreamEvent` is a superset of `RawMessageStreamEvent` returned in the iterator from `create(stream=True)`, but `RawMessageStreamEvent` instances are sufficient to collect the information required for AI Client Spans.
Run post-iterator steps in a finally block so the AI Client Span is finished even if the generator does not complete.
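The `finally` pattern looks roughly like this; `FakeSpan` is an illustrative stand-in.

```python
class FakeSpan:
    def __init__(self):
        self.finished = False
    def finish(self):
        self.finished = True

def wrap_iterator(events, span):
    try:
        for event in events:
            yield event
    finally:
        # Runs on normal exhaustion, on exceptions, and when the
        # consumer abandons the generator (GeneratorExit via close()).
        span.finish()
```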
…m()` (getsentry#5565)

Patch `Messages.stream()` and `MessageStreamManager.__enter__()` to create AI Client Spans.

Re-use existing code for setting attributes on AI Client Spans based on arguments to `anthropic` functions. Adapt tests that return a synchronous response stream with `create(stream=True)`.
…stream()` (getsentry#5572)

Patch `AsyncMessages.stream()` and `AsyncMessageStreamManager.__enter__()` to create AI Client Spans.

Adapt tests that return an asynchronous response stream with `create(stream=True)`.
…#5638)

A couple of things are going on in this PR. Bear with me: this is
probably the most all-over-the-place span-first PR, because the
outgoing trace propagation changes make mypy complain about things
elsewhere in the SDK.

### 1. Outgoing trace propagation

Support getting trace propagation information from the span with
`_get_traceparent`, `_get_baggage`, `_iter_headers`, etc. These mirror
the old `Span` class to make integrating `StreamedSpan`s with the rest
of the SDK easier (since they're used throughout), with one difference:
they're explicitly private, while the corresponding `Span` methods were
public. Added aliases to them so that we can use the private methods
everywhere.

There is definite clean up potential here once we get rid of the old
spans and we no longer have to make the streaming span interface work
with the existing helper scope methods.

### 2. Addressing cascading mypy issues

Now that we're officially allowing `StreamedSpan`s to be set on the
scope, a LOT of type hints need updating all over the SDK. In many
places, I've added explicit guards against functionality that doesn't
exist in span first mode. This should prevent folks from using the wrong
APIs in the wrong SDK mode (tracing vs. static) as well as make mypy
happy.

---------

Co-authored-by: Erica Pisani <pisani.erica@gmail.com>
We'll only support continuous profiling in span first.

Note: Span-first profiling is not yet supported server-side.
…522f241581534dfc89bd99ec3b1da4f6 to 6b1f51ec8af03e19087df452b426aa7e46d2b20a (getsentry#5669)
…try#5678)

Add `GEN_AI_RESPONSE_FINISH_REASONS` span data to the Anthropic
integration by capturing `stop_reason` from API responses.

For non-streaming responses, the `stop_reason` is read directly from the
`Message` result. For streaming responses, it's extracted from the
`MessageDeltaEvent` delta and passed through the `_collect_ai_data`
helper.

This brings the Anthropic integration in line with the OpenAI
integration's finish reason tracking.

---------

Co-authored-by: Claude <noreply@anthropic.com>
### Description
In certain scenarios, the SDK's log batcher might cause a deadlock. This
happens if it's currently flushing, and during the flush, something
emits a log that we try to capture and add to the (locked) batcher.

With this PR, we're adding a re-entry guard to the batcher, preventing
it from recursively handling log items during locked code paths like
`flush()`.

#### Issues
Closes getsentry#5681
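A minimal sketch of such a re-entry guard, assuming a thread-local flag around the locked flush path; the class and method names are illustrative, not the SDK's actual batcher.

```python
import threading

class LogBatcher:
    def __init__(self):
        self._lock = threading.Lock()
        self._buffer = []
        self._guard = threading.local()

    def add(self, item):
        # Drop logs emitted from inside flush() instead of re-taking
        # the (non-reentrant) lock, which would deadlock this thread.
        if getattr(self._guard, "in_flush", False):
            return False
        with self._lock:
            self._buffer.append(item)
        return True

    def flush(self, on_flush=None):
        # Guard covers the whole locked section: anything captured
        # while flushing is dropped rather than re-entering the batcher.
        self._guard.in_flush = True
        try:
            with self._lock:
                items, self._buffer = self._buffer, []
                if on_flush is not None:
                    on_flush(self)  # simulates code that logs mid-flush
            return items
        finally:
            self._guard.in_flush = False
```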

#### Reminders
- Please add tests to validate your changes, and lint your code using
`tox -e linters`.
- Add GH Issue ID _&_ Linear ID (if applicable)
- PR title should use [conventional
commit](https://develop.sentry.dev/engineering-practices/commit-messages/#type)
style (`feat:`, `fix:`, `ref:`, `meta:`)
- For external contributors:
[CONTRIBUTING.md](https://github.com/getsentry/sentry-python/blob/master/CONTRIBUTING.md),
[Sentry SDK development docs](https://develop.sentry.dev/sdk/), [Discord
community](https://discord.gg/Ww9hbqr)
sentrivana and others added 26 commits March 19, 2026 10:42
This PR makes the ASGI integration work both in legacy and span
streaming mode. Some features and attributes will be missing in span
streaming mode for now (see the Out of scope section below).

Best reviewed with whitespace ignored:
https://github.com/getsentry/sentry-python/pull/5680/changes?w=1

---

A bit of a background on migrating integrations to span first. In order
to support both legacy spans and span streaming, most integrations will
follow the same patterns:

### API

We need to use the `start_span` API from `sentry_sdk.traces` if we're in
span streaming mode (`traces_lifecycle="stream"`).

There are no transactions anymore. Top-level spans will also be started
via the `start_span` API in span streaming mode.

### Setting data on spans

If an integration sets data on a span (via `span.set_data`,
`span.set_tag` etc.), it should use `span.set_attribute` when span
streaming is enabled.

The attributes that we set need to be in Sentry conventions. This is
deliberately not the case for most quick ports of integrations like this
one and will follow in [a future
step](getsentry#5152).

### Trace propagation

If an integration sits at a service boundary and is capable of
propagating incoming trace information (like WSGI/ASGI or Celery), in
span first mode we need to switch from the old style `with
continue_trace(...) as transaction:` to the [new style
`continue_trace()` and
`new_trace()`](https://sentry-docs-git-ivana-span-first-migration-guide.sentry.dev/platforms/python/migration/span-first/#trace-propagation)
(not context managers).
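Either way, the data being propagated is the incoming `sentry-trace` header. A minimal sketch of parsing it, assuming the documented `<trace_id>-<parent_span_id>[-<sampled>]` format (the helper name is invented for the example):

```python
def parse_sentry_trace(header):
    # Expected shape: "<trace_id>-<parent_span_id>" with an optional
    # trailing "-0"/"-1" sampled flag.
    parts = header.strip().split("-")
    if len(parts) not in (2, 3):
        return None
    sampled = None
    if len(parts) == 3:
        sampled = parts[2] == "1"
    return {
        "trace_id": parts[0],
        "parent_span_id": parts[1],
        "sampled": sampled,
    }
```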

### `start_span` arguments

You can pass things like `op`, `origin`, `source` to the old
`start_span` API. With the new API, this is no longer possible, and the
individual properties need to be set as attributes directly.
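The translation looks roughly like this. `FakeSpan` and the attribute keys (`sentry.op` etc.) are illustrative assumptions, not guaranteed SDK conventions.

```python
class FakeSpan:
    def __init__(self, name):
        self.name = name
        self.attributes = {}
    def set_attribute(self, key, value):
        self.attributes[key] = value

def start_span_new_style(name, op=None, origin=None, source=None):
    # op/origin/source can no longer be passed as kwargs to the new
    # start_span API, so set them as attributes after creation.
    span = FakeSpan(name)
    if op is not None:
        span.set_attribute("sentry.op", op)
    if origin is not None:
        span.set_attribute("sentry.origin", origin)
    if source is not None:
        span.set_attribute("sentry.source", source)
    return span
```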

### Out of scope

For now, the following is out of scope and will follow in the future:
- Making sure all attributes are correct and that they're set in Sentry
conventions: getsentry#5152
- Migrating event processors (as streaming spans are not events, event
processors are not run on them, meaning some data will not be set yet):
getsentry#5152
…etsentry#5683)

Consolidate span finishing logic in a new `_StreamSpanContext` context manager.
Forward exception info from `_StreamSpanContext.__exit__()` to `Span.__exit__()`.
Update our test matrix with new releases of integrated frameworks and
libraries.

## How it works
- Scan PyPI for all supported releases of all frameworks we have a
dedicated test suite for.
- Pick a representative sample of releases to run our test suite
against. We always test the latest and oldest supported version.
- Update
[tox.ini](https://github.com/getsentry/sentry-python/blob/master/tox.ini)
with the new releases.

## Action required
- If CI passes on this PR, it's safe to approve and merge. It means our
integrations can handle new versions of frameworks that got pulled in.
- If CI doesn't pass on this PR, this points to an incompatibility of
either our integration or our test setup with a new version of a
framework.
- Check what the failures look like and either fix them, or update the
[test
config](https://github.com/getsentry/sentry-python/blob/master/scripts/populate_tox/config.py)
and rerun
[scripts/generate-test-files.sh](https://github.com/getsentry/sentry-python/blob/master/scripts/generate-test-files.sh).
See
[scripts/populate_tox/README.md](https://github.com/getsentry/sentry-python/blob/master/scripts/populate_tox/README.md)
for what configuration options are available.

 _____________________

_🤖 This PR was automatically created using [a GitHub
action](https://github.com/getsentry/sentry-python/blob/master/.github/workflows/update-tox.yml)._

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ivana Kellyer <ivana.kellyer@sentry.io>
Reusing the `toxgen/update` branch can get annoying, especially since it
needs manual babysitting (empty commit to trigger CI). Let's use
distinct names instead.

## Summary
- Instead of always reusing the same `toxgen/update` branch, the "Update
test matrix" workflow now creates a date-based branch (e.g.
`toxgen/update-03-19`).
- Same-day reruns force-push to the same branch, so they overwrite
rather than fail.
- The PR-closing logic now finds and closes all open PRs from any
`toxgen/` branch, not just the current one.

## Test plan
- [ ] Trigger the "Update test matrix" workflow manually and verify it
creates a branch like `toxgen/update-MM-DD`
- [ ] Trigger it again on the same day and verify it overwrites the
branch and closes the previous PR
- [ ] Verify old toxgen PRs from different branches get closed

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…#5706)

## Summary
- For every auto-generated integration, adds a tox environment that
aliases the highest tested version (e.g. `tox -e py3.14-httpx-latest` =
`tox -e py3.14-httpx-v0.28.1`)
- Makes it easy to run tests against the latest version without looking
up exact version strings

## Notes
- The new latest env is NOT run in CI
- The new latest env points to the highest non-prerelease env

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…sentry#5714)

Add the experimental `suppress_asgi_chained_exceptions` option. The option defaults to `True` when unset, preserving the current behavior.
## Summary

- Pin all GitHub Actions references in `.github/` workflow files to
full-length commit SHAs

Generated by `devenv pin_gha`.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
…etsentry#5796)

Update LangChain integration to use the new `gen_ai.generate_text`
operation for LLM call spans, aligning with OpenTelemetry semantic
conventions.

**Changes:**
- Changed LLM span operation from `gen_ai.pipeline` to
`gen_ai.generate_text`
- Updated span naming to include the model identifier: `generate_text
{model}` instead of generic "Langchain LLM call"
- Removed unnecessary docstring from callback method
- Updated test assertions to validate the new operation and naming
convention

This change improves observability by using more specific operation
types that accurately reflect the semantic nature of LLM generation
calls, and includes the model identifier for better span context.

Related to SDK-669. Replaces some of the changes introduced in getsentry#5705
(this is being broken down into 2 parts due to upcoming changes in the
langchain test suite)

Co-authored-by: Claude Haiku 4.5 <noreply@anthropic.com>
Replace mocks with `httpx` types to avoid test failures when library internals change.
…5726)

Add test for LangChain v1.0 functionality using sample Responses API output reused from `openai-agents` tests.
Add test for LangChain v1.0 functionality using sample Responses API output with a tool call request reused from `openai-agents` tests.
Replace test with manual hook calls with a test that uses the library in a way that the same hooks are invoked.
Normalize multiline SQL query whitespace in the asyncpg integration so
that span descriptions contain single-line queries with collapsed
whitespace.

asyncpg passes raw multiline SQL strings as span descriptions. This
makes it difficult for users to match queries in
`before_send_transaction` callbacks — they'd need to account for
newlines and varying indentation instead of writing simple substring
checks like `"SELECT id, name FROM users" in desc`.

Fixes PY-2255 and getsentry#5850
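One way to implement the normalization described above is a single regex pass; this is a sketch, not necessarily the integration's exact implementation.

```python
import re

def normalize_query(query):
    # Collapse every run of whitespace (newlines, tabs, indentation)
    # into a single space and trim both ends.
    return re.sub(r"\s+", " ", query).strip()
```

A multiline query then becomes a single-line span description that simple substring checks can match.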

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add a GitHub Actions workflow that automatically converts non-draft PRs
to draft on open/reopen, and document the draft PR requirement in
CONTRIBUTING.md.

**Motivation:** Our [code submission
standard](https://develop.sentry.dev/sdk/getting-started/standards/code-submission/#pull-requests)
says PRs must start as drafts, but this wasn't enforced. Non-draft PRs
trigger review notifications prematurely and signal readiness when the
author may not be done. This workflow nudges contributors to open as
draft by automatically converting non-draft PRs and posting a comment
explaining the policy.

**What's included:**
- `.github/workflows/enforce-draft-pr.yml` — triggers on
`pull_request_target` (opened, reopened), uses the GraphQL
`convertPullRequestToDraft` mutation. Includes error handling and
comment deduplication on reopen.
- `CONTRIBUTING.md` — new "Pull Requests" section documenting the draft
requirement and linking to the code submission standard.

**Limitations:** GitHub has no pre-creation hook, so the initial CI run
and review notifications still fire before the conversion. The value is
in training contributors and ensuring draft state before a maintainer
picks up the PR.

No exemptions — applies to everyone (maintainers, internal, external
contributors).

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
`hasattr` passes when `content.parts` is `None`, but iterating over it
crashes with a `TypeError`. Switch to a truthiness check instead.

Fixes getsentry#5854
@thakoreh thakoreh requested a review from a team as a code owner March 25, 2026 18:30
@github-actions
Contributor

Semver Impact of This PR

🟢 Patch (bug fixes)

📋 Changelog Preview

This is how your changes will appear in the changelog.
Entries from this PR are highlighted with a left border (blockquote style).


Bug Fixes 🐛

  • Handle None content.parts in google_genai by thakoreh in #5872

🤖 This preview updates automatically when you update the PR.

@alexander-alderman-webb
Contributor

Resolved by bcfb788
