refactor: rewrite in rust
All checks were successful
Continuous Integration / Lint, Check & Test (push) Successful in 1m38s
Continuous Integration / Build Package (push) Successful in 1m54s

This commit is contained in:
Konstantin Fickel 2026-03-29 18:19:15 +02:00
parent 20a3e8b437
commit ed493cff29
Signed by: kfickel
GPG key ID: A793722F9933C1A5
72 changed files with 5684 additions and 3688 deletions

4
.envrc

@@ -1,3 +1 @@
#!/usr/bin/env bash
eval "$(devenv direnvrc)"
use devenv
use flake

198
.gitignore vendored

@@ -1,184 +1,24 @@
# Created by https://www.toptal.com/developers/gitignore/api/python
# Edit at https://www.toptal.com/developers/gitignore?templates=python
# Rust build artifacts
/target/
### Python ###
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# IDE
.idea/
.vscode/
*.swp
*.swo
*~
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
### Python Patch ###
# Poetry local configuration file - https://python-poetry.org/docs/configuration/#local-configuration
poetry.toml
# ruff
.ruff_cache/
# LSP config files
pyrightconfig.json
# End of https://www.toptal.com/developers/gitignore/api/python
# OS
.DS_Store
Thumbs.db
# Nix
.direnv
test-report.xml
.devenv
.devenv.flake.nix
.pre-commit-config.yaml
result
result-*
# Test artifacts
*.profraw
*.profdata
.pre-commit-config.yaml


@@ -1,11 +0,0 @@
repos:
- repo: https://github.com/astral-sh/uv-pre-commit
rev: 0.7.13
hooks:
- id: uv-lock
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.12.0
hooks:
- id: ruff
args: [ --fix ]
- id: ruff-format


@@ -1 +0,0 @@
3.13

35
CLAUDE.md Normal file

@@ -0,0 +1,35 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Build & Test Commands
```bash
nix develop # Enter dev shell with Rust toolchain
nix build # Build the package
nix flake check # Run all checks (clippy, fmt, tests, pre-commit)
# Inside nix develop:
cargo test # Run all tests
cargo test test_name # Run a specific test
cargo clippy # Lint
cargo fmt # Format
```
## Architecture
Streamd parses markdown files into hierarchical **Shards**, then **localizes** them by assigning temporal moments and dimensional placements based on `@Tag` markers.
**Data flow:** Markdown → `extract::parser` → `Shard` tree → `localize::shard` → `LocalizedShard` tree
**Key modules:**
- `models/` — Core types: `Shard`, `LocalizedShard`, `Dimension`, `Marker`, `Timecard`
- `extract/` — Tag extraction (`tag_extraction.rs`) and markdown parsing (`parser.rs`)
- `localize/` — DateTime extraction, configuration merging, shard localization
- `timesheet/` — State machine that converts localized shards into timecards
- `query/` — Recursive search functions for finding shards by predicate
- `cli/` — Clap-based CLI commands
## Requirements
`REQUIREMENTS.md` contains the formal specification. Update it along with the `README.md` whenever implementing or changing features.

1299
Cargo.lock generated Normal file

File diff suppressed because it is too large

40
Cargo.toml Normal file

@@ -0,0 +1,40 @@
[package]
name = "streamd"
version = "0.1.0"
edition = "2021"
description = "Personal knowledge management and time-tracking CLI using @Tag annotations"
license = "AGPL-3.0-only"
authors = ["Konstantin Fickel"]
repository = "https://github.com/konstantinfickel/streamd"
[dependencies]
clap = { version = "4", features = ["derive", "env"] }
clap_complete = "4"
serde = { version = "1", features = ["derive"] }
serde_yaml = "0.9"
thiserror = "2"
miette = { version = "7", features = ["fancy"] }
pulldown-cmark = "0.12"
regex = "1"
once_cell = "1"
chrono = { version = "0.4", features = ["serde"] }
walkdir = "2"
indexmap = { version = "2", features = ["serde"] }
itertools = "0.13"
directories = "5"
[dev-dependencies]
pretty_assertions = "1"
tempfile = "3"
[[bin]]
name = "streamd"
path = "src/main.rs"
[lib]
name = "streamd"
path = "src/lib.rs"
[profile.release]
lto = true
strip = true

375
REQUIREMENTS.md Normal file

@@ -0,0 +1,375 @@
# Streamd Requirements
Streamd (stylized as "Strea.md") is a personal knowledge management and time-tracking CLI tool that organizes time-ordered markdown files using `@Tag` annotations.
## Core Concepts
### Shard
A **Shard** is the fundamental unit of content. It represents a section of a markdown file (paragraph, heading, list item) that can contain markers and tags.
```
Shard {
markers: [String] // @Tag annotations at START of content
tags: [String] // @Tag annotations AFTER content begins
start_line: int
end_line: int
children: [Shard] // Nested shards (hierarchical)
}
```
### LocalizedShard
A **LocalizedShard** extends Shard with temporal and dimensional placement information.
```
LocalizedShard {
markers: [String]
tags: [String]
start_line: int
end_line: int
moment: DateTime // When this entry was created
location: Map<String, String> // Dimension placements
children: [LocalizedShard]
}
```
---
## Tag Extraction Logic
### R1: Tag Recognition Pattern
Tags are recognized by the regex pattern: `@([^\s*\x60~\[\]]+)`
A tag is `@` followed by one or more characters, excluding:
- Whitespace
- Asterisks `*`
- Backticks `` ` ``
- Tildes `~`
- Brackets `[]`
**Examples of valid tags:**
- `@Task`, `@Done`, `@Waiting`
- `@Timesheet`, `@Break`
- `@ProjectName`, `@Client-ABC`
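The character class above can be illustrated without a regex engine. This is a std-only sketch; `extract_tags` is a hypothetical helper, not the crate's actual API, which works on markdown tokens:

```rust
/// Mirrors the R1 character class [^\s*`~\[\]]: anything except
/// whitespace, asterisks, backticks, tildes, and brackets.
fn is_tag_char(c: char) -> bool {
    !c.is_whitespace() && !matches!(c, '*' | '`' | '~' | '[' | ']')
}

/// Extracts every @Tag body from a line of text (illustrative helper).
fn extract_tags(text: &str) -> Vec<String> {
    let mut tags = Vec::new();
    let mut rest = text;
    while let Some(pos) = rest.find('@') {
        let after = &rest[pos + 1..];
        // A tag body runs until the first excluded character.
        let end = after
            .find(|c: char| !is_tag_char(c))
            .unwrap_or(after.len());
        if end > 0 {
            tags.push(after[..end].to_string());
        }
        rest = &after[end..];
    }
    tags
}

fn main() {
    assert_eq!(extract_tags("@Task @Client-ABC work"), vec!["Task", "Client-ABC"]);
    assert!(extract_tags("no tags here").is_empty());
}
```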
### R2: Marker vs Tag Distinction
The extraction MUST distinguish between **markers** and **tags** based on their position within a block:
| Type | Position | Purpose |
|------|----------|---------|
| **Marker** | Before any non-whitespace content | Semantic classification (triggers shard creation) |
| **Tag** | After non-whitespace content | Metadata annotation (does not trigger shard creation) |
**Example:**
```markdown
@Task @Streamd Working on feature <!-- @Task and @Streamd are MARKERS -->
Some text here @CompletedFeature <!-- @CompletedFeature is a TAG -->
```
### R3: Marker Boundary Tracking
The extraction algorithm MUST track a "marker boundary" state:
1. Start with `marker_boundary_encountered = false`
2. While processing tokens:
- If whitespace-only: continue (boundary not crossed)
- If `@Tag` token found AND boundary NOT crossed: add to markers
- If `@Tag` token found AND boundary crossed: add to tags
- If any non-whitespace content found: set boundary = crossed
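The boundary rule above can be sketched as a fold over a simplified token stream; the `Token` enum here is an illustration, not the parser's real type:

```rust
/// Simplified token stream for this sketch.
enum Token<'a> {
    Whitespace,
    Tag(&'a str),  // an @Tag token
    Text(&'a str), // any other non-whitespace content
}

/// Splits tokens into (markers, tags) per the R3 boundary rule.
fn split_markers(tokens: &[Token]) -> (Vec<String>, Vec<String>) {
    let mut markers = Vec::new();
    let mut tags = Vec::new();
    let mut boundary_crossed = false;
    for token in tokens {
        match token {
            Token::Whitespace => {} // boundary not crossed
            Token::Tag(name) if !boundary_crossed => markers.push(name.to_string()),
            Token::Tag(name) => tags.push(name.to_string()),
            Token::Text(_) => boundary_crossed = true, // other content crosses it
        }
    }
    (markers, tags)
}

fn main() {
    let tokens = [
        Token::Tag("Task"),
        Token::Whitespace,
        Token::Tag("Streamd"),
        Token::Text("Working on feature"),
        Token::Tag("CompletedFeature"),
    ];
    let (markers, tags) = split_markers(&tokens);
    assert_eq!(markers, vec!["Task", "Streamd"]);
    assert_eq!(tags, vec!["CompletedFeature"]);
}
```

Note that `@Tag` tokens themselves do not cross the boundary, which is why `@Task @Streamd` in the R2 example are both markers.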
### R4: Nested Token Handling
Tag extraction MUST handle nested markdown formatting:
- Emphasis: `*@Tag*` or `_@Tag_`
- Strong: `**@Tag**` or `__@Tag__`
- Strikethrough: `~~@Tag~~`
- Links: `[@Tag](url)`
Tags inside these formatting elements are still valid and should be extracted.
### R5: Applicable Block Types
Tag extraction applies to:
- Headings (`# Heading with @Tag`)
- Paragraphs (`@Tag in paragraph`)
- Quote blocks (`> @Tag in Quote`)
- List items (each item can have its own markers)
---
## Parsing Logic
### R6: Heading-Based Hierarchy
The parser MUST create a hierarchical shard structure based on markdown headings.
**Algorithm for determining split level:**
1. Find the minimum heading level that either:
- Appears 2+ times in the block list, OR
- Has markers AND is not the first heading
2. If no such level exists, do not split (return None)
**Example:**
```markdown
# Main Title
Content here
## Section A <!-- Split point (level 2 appears twice) -->
Section A content
## Section B <!-- Split point -->
Section B content
```
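A sketch of the split-level selection, using a hypothetical `Block` view of the parsed document (the real parser's types differ):

```rust
/// Minimal view of a parsed block: its heading level (if it is a
/// heading) and whether it carries markers.
struct Block {
    heading_level: Option<u8>,
    has_markers: bool,
}

/// R6 sketch: the split level is the minimum heading level that appears
/// twice or more, or that has markers and is not the first heading.
fn split_level(blocks: &[Block]) -> Option<u8> {
    let first_heading = blocks.iter().position(|b| b.heading_level.is_some());
    (1u8..=6).find(|&level| {
        let at_level: Vec<usize> = blocks
            .iter()
            .enumerate()
            .filter(|(_, b)| b.heading_level == Some(level))
            .map(|(i, _)| i)
            .collect();
        at_level.len() >= 2
            || at_level
                .iter()
                .any(|&i| blocks[i].has_markers && Some(i) != first_heading)
    })
}

fn main() {
    let doc = [
        Block { heading_level: Some(1), has_markers: false }, // # Main Title
        Block { heading_level: None, has_markers: false },    // Content here
        Block { heading_level: Some(2), has_markers: false }, // ## Section A
        Block { heading_level: Some(2), has_markers: false }, // ## Section B
    ];
    // Level 1 appears once; level 2 appears twice, so split at level 2.
    assert_eq!(split_level(&doc), Some(2));
}
```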
### R7: List Item Shard Creation
Each list item with markers MUST become its own shard:
```markdown
- @Task Item one <!-- Shard 1 -->
- @Task Item two <!-- Shard 2 -->
- Item three <!-- NOT a separate shard (no markers) -->
```
### R8: Shard Simplification
When building shards, apply this optimization:
- If a shard has exactly 1 child AND no markers AND no tags
- Return the child directly instead of wrapping it
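The simplification is a small function over a minimal `Shard` shape (field names follow the spec above; the real struct also tracks line numbers):

```rust
#[derive(Debug, PartialEq, Clone)]
struct Shard {
    markers: Vec<String>,
    tags: Vec<String>,
    children: Vec<Shard>,
}

/// R8 sketch: a wrapper with exactly one child and no markers or tags
/// collapses into its single child.
fn simplify(shard: Shard) -> Shard {
    if shard.children.len() == 1 && shard.markers.is_empty() && shard.tags.is_empty() {
        shard.children.into_iter().next().unwrap()
    } else {
        shard
    }
}

fn main() {
    let leaf = Shard { markers: vec!["Task".into()], tags: vec![], children: vec![] };
    let wrapper = Shard { markers: vec![], tags: vec![], children: vec![leaf.clone()] };
    assert_eq!(simplify(wrapper), leaf);
}
```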
---
## Dimension Placement Logic
### R9: Dimension Configuration
A **Dimension** defines a classification axis:
```
Dimension {
display_name: String // For UI display
comment: String? // Documentation
propagate: bool // Whether children inherit this dimension
}
```
### R10: Marker Configuration
A **Marker** defines how a tag affects dimension placement:
```
Marker {
display_name: String
placements: [MarkerPlacement]
}
MarkerPlacement {
if_with: Set<String> // Conditional: only apply if ALL these markers present
dimension: String // Target dimension name
value: String? // Value to assign (defaults to marker name)
overwrites: bool // Can overwrite existing placement
}
```
### R11: Conditional Placement
Placements with `if_with` conditions MUST only apply when ALL specified markers are present on the same shard.
**Example Configuration:**
```
Marker "Task" {
placements: [
{ dimension: "task", value: "open" },
{ if_with: ["Done"], dimension: "task", value: "done" },
{ if_with: ["Waiting"], dimension: "task", value: "waiting" },
]
}
```
**Behavior:**
- `@Task` alone → `task: "open"`
- `@Task @Done` → `task: "done"` (conditional overrides default)
- `@Task @Waiting` → `task: "waiting"`
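A sketch of how one marker's placements could be applied under these rules. The types are illustrative, and note the conditional placement needs `overwrites: true` to replace the default, per R14:

```rust
use std::collections::{BTreeMap, HashSet};

/// Illustrative placement config; the real structs live in `models/`.
struct Placement {
    if_with: HashSet<String>,
    dimension: String,
    value: Option<String>,
    overwrites: bool,
}

/// Applies one marker's placements to `position`: a placement fires only
/// when all of its `if_with` markers are present on the shard, and only
/// replaces an existing value when `overwrites` is set (R14).
fn apply_marker(
    marker: &str,
    placements: &[Placement],
    shard_markers: &HashSet<String>,
    position: &mut BTreeMap<String, String>,
) {
    for p in placements {
        if !p.if_with.is_subset(shard_markers) {
            continue;
        }
        // Value defaults to the marker name itself (R10).
        let value = p.value.clone().unwrap_or_else(|| marker.to_string());
        if !position.contains_key(&p.dimension) || p.overwrites {
            position.insert(p.dimension.clone(), value);
        }
    }
}

fn main() {
    let placements = vec![
        Placement { if_with: HashSet::new(), dimension: "task".into(), value: Some("open".into()), overwrites: false },
        Placement { if_with: ["Done".to_string()].into(), dimension: "task".into(), value: Some("done".into()), overwrites: true },
    ];
    let markers: HashSet<String> = ["Task".to_string(), "Done".to_string()].into();
    let mut position = BTreeMap::new();
    apply_marker("Task", &placements, &markers, &mut position);
    assert_eq!(position.get("task").map(String::as_str), Some("done"));
}
```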
### R12: Localization Algorithm
The localization process MUST follow this algorithm:
```
function localize_shard(shard, config, propagated_from_parent, moment):
position = copy(propagated_from_parent) // Start with inherited
private_position = {} // Non-propagating dimensions
for marker in shard.markers:
if marker in config.markers:
for placement in marker.placements:
// Check conditional
if placement.if_with is subset of shard.markers:
dimension = config.dimensions[placement.dimension]
value = placement.value OR marker
// Check if we can apply this placement
target = dimension.propagate ? position : private_position
if placement.dimension not in target OR placement.overwrites:
target[placement.dimension] = value
// Recursively localize children with propagating dimensions
children = [
localize_shard(child, config, position, moment)
for child in shard.children
]
// Merge private dimensions into final position
position.update(private_position)
return LocalizedShard(
markers: shard.markers,
tags: shard.tags,
location: position,
moment: moment,
children: children,
)
```
### R13: Dimension Propagation
When `propagate = true`:
- Children inherit the dimension value from their parent
- Child can override with their own placement
When `propagate = false`:
- Dimension value is NOT inherited by children
- Each shard must have its own marker to be placed in this dimension
**Example:**
```
dimensions: {
"project": { propagate: true }, // Children inherit project
"task": { propagate: false }, // Each task is independent
}
```
```markdown
# @Project-X
## @Task Item A <!-- project: "Project-X", task: "open" -->
### Sub-item <!-- project: "Project-X", task: (none) -->
## @Task Item B <!-- project: "Project-X", task: "open" -->
```
### R14: Overwrite Behavior
Default: A placement does NOT overwrite an existing value in the dimension.
With `overwrites: true`: The placement WILL replace any existing value.
This allows conditional placements to override base placements.
---
## File Naming Convention
### R15: File Name Format
Files follow the pattern: `YYYYMMDD-HHMMSS [markers].md`
- `YYYYMMDD`: Date (8 digits, required)
- `HHMMSS`: Time (4-6 digits, optional; shorter values are zero-padded to six digits)
- `[markers]`: Space-separated marker names extracted from file content
**Extraction regex:** `^(?P<date>\d{8})(?:-(?P<time>\d{4,6}))?.+\.md$`
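The same parse can be illustrated without a regex engine. This is a std-only sketch; padding short times with trailing zeros (so `HHMM` gets `00` seconds) is an assumption about "pads with zeros":

```rust
/// Parses `YYYYMMDD[-HHMMSS] …` from a stream file name (R15 sketch).
/// Returns the date and, if present and valid, the time padded to six digits.
fn parse_file_name(name: &str) -> Option<(String, Option<String>)> {
    let name = name.strip_suffix(".md")?;
    let date: String = name.chars().take(8).collect();
    if date.len() != 8 || !date.chars().all(|c| c.is_ascii_digit()) {
        return None; // date part is required
    }
    let rest = &name[8..];
    // Time is optional and introduced by a hyphen.
    let time = rest
        .strip_prefix('-')
        .map(|r| r.chars().take_while(|c| c.is_ascii_digit()).collect::<String>());
    let time = match time {
        // 4-6 digits, right-padded with zeros to six (assumed semantics).
        Some(t) if (4..=6).contains(&t.len()) => Some(format!("{:0<6}", t)),
        _ => None,
    };
    Some((date, time))
}

fn main() {
    assert_eq!(
        parse_file_name("20240315-0930 Task.md"),
        Some(("20240315".to_string(), Some("093000".to_string())))
    );
    assert_eq!(parse_file_name("notes.md"), None);
}
```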
### R16: Temporal Markers
Special markers can override the file timestamp:
- Date markers: `@YYYYMMDD` (8 digits)
- Time markers: `@HHMMSS` (6 digits)
These are used for entries that reference a different time than when the file was created.
---
## Timesheet Module
### R17: Timesheet Point Types
```
TimesheetPointType {
Card, // Clock in / start work
Break, // Clock out / end work
SickLeave,
Vacation,
Holiday,
Undertime,
}
```
### R18: Timesheet State Machine
Process timesheet shards chronologically per day:
1. Start state: "on break" (not working)
2. `Card` marker: Transition to "working", record start time
3. `Break` marker: Transition to "on break", emit timecard from previous start to now
4. Special markers (SickLeave, Vacation, etc.): Set day type
**Validation:** The last entry of each day MUST be a `Break` (cannot end day while working).
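The state machine can be sketched as a fold over a day's chronological points. Times are minutes since midnight for brevity; the real implementation uses full timestamps and the extra point types:

```rust
#[derive(Clone, Copy)]
enum Point {
    Card(u32),  // clock in, minutes since midnight
    Break(u32), // clock out
}

/// Folds one day's points into (start, end) timecards per R18.
/// Returns Err if the day ends while still working.
fn day_timecards(points: &[Point]) -> Result<Vec<(u32, u32)>, &'static str> {
    let mut cards = Vec::new();
    let mut working_since: Option<u32> = None;
    for p in points {
        match (*p, working_since) {
            (Point::Card(t), None) => working_since = Some(t), // start working
            (Point::Break(t), Some(start)) => {
                cards.push((start, t)); // emit timecard, back on break
                working_since = None;
            }
            _ => {} // duplicate Card/Break: ignored in this sketch
        }
    }
    if working_since.is_some() {
        return Err("day must end with a Break");
    }
    Ok(cards)
}

fn main() {
    let day = [Point::Card(540), Point::Break(720), Point::Card(780), Point::Break(1020)];
    assert_eq!(day_timecards(&day), Ok(vec![(540, 720), (780, 1020)]));
    assert!(day_timecards(&[Point::Card(540)]).is_err());
}
```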
---
## Query System
### R19: Shard Search
Provide recursive search through the shard tree:
- `find_shard(predicate)`: Find all shards matching a predicate function
- `find_by_position(dimension, value)`: Find shards where `location[dimension] == value`
- `find_by_set_dimension(dimension)`: Find shards where dimension exists in location
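A sketch of the recursive search over a trimmed-down shard type; `find_by_position` then falls out as a predicate closure:

```rust
use std::collections::BTreeMap;

/// Trimmed-down localized shard for this sketch.
struct LocalizedShard {
    location: BTreeMap<String, String>,
    children: Vec<LocalizedShard>,
}

/// R19 sketch: depth-first walk collecting every shard that satisfies
/// the predicate.
fn find_shard<'a>(
    shard: &'a LocalizedShard,
    predicate: &dyn Fn(&LocalizedShard) -> bool,
    out: &mut Vec<&'a LocalizedShard>,
) {
    if predicate(shard) {
        out.push(shard);
    }
    for child in &shard.children {
        find_shard(child, predicate, out);
    }
}

fn main() {
    let leaf = LocalizedShard {
        location: BTreeMap::from([("task".to_string(), "open".to_string())]),
        children: vec![],
    };
    let root = LocalizedShard { location: BTreeMap::new(), children: vec![leaf] };
    let mut open = Vec::new();
    // find_by_position("task", "open") expressed as a predicate:
    find_shard(&root, &|s| s.location.get("task").map(String::as_str) == Some("open"), &mut open);
    assert_eq!(open.len(), 1);
}
```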
---
## CLI Commands
### R20: Core Commands
| Command | Description |
|---------|-------------|
| `streamd new` | Create new timestamped file, open editor, rename with markers on close |
| `streamd todo` | List all shards with `task: "open"` |
| `streamd edit [n]` | Edit the nth file (positive n counts back from the most recent file, negative n from the oldest) |
| `streamd timesheet` | Extract and export timesheet data as CSV |
| `streamd completions <shell>` | Generate shell completions (bash, zsh, fish, elvish, powershell) |
---
## Application Configuration
### R22: Config File Location
The application configuration is stored at `~/.config/streamd/config.yaml`:
```yaml
base_folder: /path/to/stream/files
```
### R23: Environment Variable Override
The `STREAMD_BASE_FOLDER` environment variable can override the config file setting.
---
## Configuration Merging
### R24: Configuration Composition
Multiple configurations can be merged:
- Dimensions are combined (later configs can add new dimensions)
- Markers are combined (later configs can add new markers)
- This allows base configuration + domain-specific extensions
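A minimal sketch of the merge for one of the maps; last-wins on a name clash is an assumption, since the spec only says entries are combined:

```rust
use std::collections::BTreeMap;

/// R24 sketch: combine a base configuration map with an extension.
/// Extension entries win on a name clash (assumed semantics).
fn merge<V>(base: BTreeMap<String, V>, extension: BTreeMap<String, V>) -> BTreeMap<String, V> {
    let mut merged = base;
    merged.extend(extension);
    merged
}

fn main() {
    let base = BTreeMap::from([("project".to_string(), "base".to_string())]);
    let ext = BTreeMap::from([("task".to_string(), "extension".to_string())]);
    // Dimensions from both configs are present in the result.
    assert_eq!(merge(base, ext).len(), 2);
}
```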


@@ -1,123 +0,0 @@
{
"nodes": {
"devenv": {
"locked": {
"dir": "src/modules",
"lastModified": 1771157881,
"owner": "cachix",
"repo": "devenv",
"rev": "b0b3dfa70ec90fa49f672e579f186faf4f61bd4b",
"type": "github"
},
"original": {
"dir": "src/modules",
"owner": "cachix",
"repo": "devenv",
"type": "github"
}
},
"flake-compat": {
"flake": false,
"locked": {
"lastModified": 1767039857,
"owner": "NixOS",
"repo": "flake-compat",
"rev": "5edf11c44bc78a0d334f6334cdaf7d60d732daab",
"type": "github"
},
"original": {
"owner": "NixOS",
"repo": "flake-compat",
"type": "github"
}
},
"git-hooks": {
"inputs": {
"flake-compat": "flake-compat",
"gitignore": "gitignore",
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1770726378,
"owner": "cachix",
"repo": "git-hooks.nix",
"rev": "5eaaedde414f6eb1aea8b8525c466dc37bba95ae",
"type": "github"
},
"original": {
"owner": "cachix",
"repo": "git-hooks.nix",
"type": "github"
}
},
"gitignore": {
"inputs": {
"nixpkgs": [
"git-hooks",
"nixpkgs"
]
},
"locked": {
"lastModified": 1762808025,
"owner": "hercules-ci",
"repo": "gitignore.nix",
"rev": "cb5e3fdca1de58ccbc3ef53de65bd372b48f567c",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "gitignore.nix",
"type": "github"
}
},
"nixpkgs": {
"inputs": {
"nixpkgs-src": "nixpkgs-src"
},
"locked": {
"lastModified": 1770434727,
"owner": "cachix",
"repo": "devenv-nixpkgs",
"rev": "8430f16a39c27bdeef236f1eeb56f0b51b33d348",
"type": "github"
},
"original": {
"owner": "cachix",
"ref": "rolling",
"repo": "devenv-nixpkgs",
"type": "github"
}
},
"nixpkgs-src": {
"flake": false,
"locked": {
"lastModified": 1769922788,
"narHash": "sha256-H3AfG4ObMDTkTJYkd8cz1/RbY9LatN5Mk4UF48VuSXc=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "207d15f1a6603226e1e223dc79ac29c7846da32e",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"devenv": "devenv",
"git-hooks": "git-hooks",
"nixpkgs": "nixpkgs",
"pre-commit-hooks": [
"git-hooks"
]
}
}
},
"root": "root",
"version": 7
}


@@ -1,28 +0,0 @@
{
pkgs,
...
}:
{
languages = {
python = {
enable = true;
uv.enable = true;
};
};
packages = [
pkgs.basedpyright
];
git-hooks.hooks = {
basedpyright = {
enable = true;
entry = "${pkgs.basedpyright}/bin/basedpyright";
files = "\\.py$";
types = [ "file" ];
};
ruff.enable = true;
ruff-format.enable = true;
commitizen.enable = true;
};
}


@@ -1,6 +0,0 @@
inputs:
git-hooks:
url: github:cachix/git-hooks.nix
inputs:
nixpkgs:
follows: nixpkgs

97
flake.lock generated

@@ -1,5 +1,20 @@
{
"nodes": {
"crane": {
"locked": {
"lastModified": 1774313767,
"narHash": "sha256-hy0XTQND6avzGEUFrJtYBBpFa/POiiaGBr2vpU6Y9tY=",
"owner": "ipetkov",
"repo": "crane",
"rev": "3d9df76e29656c679c744968b17fbaf28f0e923d",
"type": "github"
},
"original": {
"owner": "ipetkov",
"repo": "crane",
"type": "github"
}
},
"flake-compat": {
"flake": false,
"locked": {
@@ -25,11 +40,11 @@
]
},
"locked": {
"lastModified": 1770726378,
"narHash": "sha256-kck+vIbGOaM/dHea7aTBxdFYpeUl/jHOy5W3eyRvVx8=",
"lastModified": 1774104215,
"narHash": "sha256-EAtviqz0sEAxdHS4crqu7JGR5oI3BwaqG0mw7CmXkO8=",
"owner": "cachix",
"repo": "git-hooks.nix",
"rev": "5eaaedde414f6eb1aea8b8525c466dc37bba95ae",
"rev": "f799ae951fde0627157f40aec28dec27b22076d0",
"type": "github"
},
"original": {
@@ -61,11 +76,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1771008912,
"narHash": "sha256-gf2AmWVTs8lEq7z/3ZAsgnZDhWIckkb+ZnAo5RzSxJg=",
"lastModified": 1774386573,
"narHash": "sha256-4hAV26quOxdC6iyG7kYaZcM3VOskcPUrdCQd/nx8obc=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "a82ccc39b39b621151d6732718e3e250109076fa",
"rev": "46db2e09e1d3f113a13c0d7b81e2f221c63b8ce9",
"type": "github"
},
"original": {
@@ -75,81 +90,31 @@
"type": "github"
}
},
"pyproject-build-systems": {
"inputs": {
"nixpkgs": [
"nixpkgs"
],
"pyproject-nix": [
"pyproject-nix"
],
"uv2nix": [
"uv2nix"
]
},
"locked": {
"lastModified": 1771039651,
"narHash": "sha256-WZOfX4APbc6vmL14ZWJXgBeRfEER8H+OIX0D0nSmv0M=",
"owner": "pyproject-nix",
"repo": "build-system-pkgs",
"rev": "69bc2b53b79cbd6ce9f66f506fc962b45b5e68b9",
"type": "github"
},
"original": {
"owner": "pyproject-nix",
"repo": "build-system-pkgs",
"type": "github"
}
},
"pyproject-nix": {
"inputs": {
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1769936401,
"narHash": "sha256-kwCOegKLZJM9v/e/7cqwg1p/YjjTAukKPqmxKnAZRgA=",
"owner": "pyproject-nix",
"repo": "pyproject.nix",
"rev": "b0d513eeeebed6d45b4f2e874f9afba2021f7812",
"type": "github"
},
"original": {
"owner": "pyproject-nix",
"repo": "pyproject.nix",
"type": "github"
}
},
"root": {
"inputs": {
"crane": "crane",
"git-hooks": "git-hooks",
"nixpkgs": "nixpkgs",
"pyproject-build-systems": "pyproject-build-systems",
"pyproject-nix": "pyproject-nix",
"uv2nix": "uv2nix"
"rust-overlay": "rust-overlay"
}
},
"uv2nix": {
"rust-overlay": {
"inputs": {
"nixpkgs": [
"nixpkgs"
],
"pyproject-nix": [
"pyproject-nix"
]
},
"locked": {
"lastModified": 1770770348,
"narHash": "sha256-A2GzkmzdYvdgmMEu5yxW+xhossP+txrYb7RuzRaqhlg=",
"owner": "pyproject-nix",
"repo": "uv2nix",
"rev": "5d1b2cb4fe3158043fbafbbe2e46238abbc954b0",
"lastModified": 1774753967,
"narHash": "sha256-HpT5fE8JQSbAxolUnw3VgGAo3urVjcrgtB2rtoxURVw=",
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "405b9b4c2c6c5a2b1d390524ce8a240729f34a96",
"type": "github"
},
"original": {
"owner": "pyproject-nix",
"repo": "uv2nix",
"owner": "oxalica",
"repo": "rust-overlay",
"type": "github"
}
}

197
flake.nix

@@ -4,23 +4,12 @@
inputs = {
nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
pyproject-nix = {
url = "github:pyproject-nix/pyproject.nix";
rust-overlay = {
url = "github:oxalica/rust-overlay";
inputs.nixpkgs.follows = "nixpkgs";
};
uv2nix = {
url = "github:pyproject-nix/uv2nix";
inputs.pyproject-nix.follows = "pyproject-nix";
inputs.nixpkgs.follows = "nixpkgs";
};
pyproject-build-systems = {
url = "github:pyproject-nix/build-system-pkgs";
inputs.pyproject-nix.follows = "pyproject-nix";
inputs.uv2nix.follows = "uv2nix";
inputs.nixpkgs.follows = "nixpkgs";
};
crane.url = "github:ipetkov/crane";
git-hooks = {
url = "github:cachix/git-hooks.nix";
@@ -32,9 +21,8 @@
{
self,
nixpkgs,
pyproject-nix,
uv2nix,
pyproject-build-systems,
rust-overlay,
crane,
git-hooks,
...
}:
@@ -42,110 +30,87 @@
inherit (nixpkgs) lib;
forAllSystems = lib.genAttrs lib.systems.flakeExposed;
workspace = uv2nix.lib.workspace.loadWorkspace { workspaceRoot = ./.; };
mkPkgs =
system:
import nixpkgs {
inherit system;
overlays = [ rust-overlay.overlays.default ];
};
overlay = workspace.mkPyprojectOverlay {
sourcePreference = "wheel";
};
editableOverlay = workspace.mkEditablePyprojectOverlay {
root = "$REPO_ROOT";
};
pythonSets = forAllSystems (
mkCraneLib =
system:
let
pkgs = nixpkgs.legacyPackages.${system};
inherit (pkgs) stdenv;
baseSet = pkgs.callPackage pyproject-nix.build.packages {
python = pkgs.python313;
pkgs = mkPkgs system;
toolchain = pkgs.rust-bin.stable.latest.default.override {
extensions = [
"rust-src"
"rust-analyzer"
];
};
pyprojectOverrides = _final: prev: {
streamd = prev.streamd.overrideAttrs (old: {
passthru = old.passthru // {
tests = (old.passthru.tests or { }) // {
pytest = stdenv.mkDerivation {
name = "${_final.streamd.name}-pytest";
inherit (_final.streamd) src;
nativeBuildInputs = [
(_final.mkVirtualEnv "streamd-pytest-env" {
streamd = [ "dev" ];
})
];
dontConfigure = true;
buildPhase = ''
runHook preBuild
# Exit code 5 means no tests collected — allow it so the
# check succeeds on an empty test suite.
pytest || [ $? -eq 5 ]
runHook postBuild
'';
installPhase = ''
runHook preInstall
touch $out
runHook postInstall
'';
};
};
};
});
};
in
baseSet.overrideScope (
lib.composeManyExtensions [
pyproject-build-systems.overlays.default
overlay
pyprojectOverrides
]
)
);
(crane.mkLib pkgs).overrideToolchain toolchain;
mkStreamd =
system:
let
craneLib = mkCraneLib system;
commonArgs = {
src = craneLib.path ./.;
pname = "streamd";
version = "0.1.0";
strictDeps = true;
};
cargoArtifacts = craneLib.buildDepsOnly commonArgs;
in
craneLib.buildPackage (
commonArgs
// {
inherit cargoArtifacts;
passthru = {
inherit cargoArtifacts;
tests = {
clippy = craneLib.cargoClippy (
commonArgs
// {
inherit cargoArtifacts;
cargoClippyExtraArgs = "--all-targets -- -D warnings";
}
);
fmt = craneLib.cargoFmt { src = commonArgs.src; };
test = craneLib.cargoTest (commonArgs // { inherit cargoArtifacts; });
};
};
}
);
mkGitHooksCheck =
system:
let
pkgs = nixpkgs.legacyPackages.${system};
pythonSet = pythonSets.${system};
venv = pythonSet.mkVirtualEnv "streamd-check-env" workspace.deps.all;
pkgs = mkPkgs system;
toolchain = pkgs.rust-bin.stable.latest.default;
in
git-hooks.lib.${system}.run {
src = ./.;
hooks = {
basedpyright = {
rustfmt = {
enable = true;
entry = "${pkgs.basedpyright}/bin/basedpyright --pythonpath ${venv}/bin/python --project ${
pkgs.writeText "pyrightconfig.json" (
builtins.toJSON {
reportMissingTypeStubs = false;
reportUnnecessaryTypeIgnoreComment = false;
}
)
}";
files = "\\.py$";
types = [ "file" ];
package = toolchain;
};
ruff.enable = true;
ruff-format.enable = true;
commitizen.enable = true;
};
};
in
{
packages = forAllSystems (
system:
let
pythonSet = pythonSets.${system};
pkgs = nixpkgs.legacyPackages.${system};
inherit (pkgs.callPackages pyproject-nix.build.util { }) mkApplication;
streamd = mkStreamd system;
in
rec {
streamd = mkApplication {
venv = pythonSet.mkVirtualEnv "streamd-env" workspace.deps.default;
package = pythonSet.streamd;
};
{
inherit streamd;
default = streamd;
}
);
@@ -171,8 +136,8 @@
package = lib.mkOption {
type = lib.types.package;
default = self.packages.${pkgs.system}.streamd;
defaultText = lib.literalExpression "inputs.streamd.packages.\${pkgs.system}.streamd";
default = self.packages.${pkgs.stdenv.hostPlatform.system}.streamd;
defaultText = lib.literalExpression "inputs.streamd.packages.\${pkgs.stdenv.hostPlatform.system}.streamd";
description = "The package to use for the streamd binary.";
};
};
@@ -191,16 +156,24 @@
};
home.shellAliases.s = "streamd";
programs.bash.initExtra = ''
eval "$(${cfg.package}/bin/streamd completions bash)"
'';
programs.zsh.initExtra = ''
eval "$(${cfg.package}/bin/streamd completions zsh)"
'';
};
};
checks = forAllSystems (
system:
let
pythonSet = pythonSets.${system};
streamd = mkStreamd system;
in
{
inherit (pythonSet.streamd.passthru.tests) pytest;
inherit (streamd.passthru.tests) clippy fmt test;
pre-commit = mkGitHooksCheck system;
}
);
@@ -208,24 +181,22 @@
devShells = forAllSystems (
system:
let
pkgs = nixpkgs.legacyPackages.${system};
pythonSet = pythonSets.${system}.overrideScope editableOverlay;
virtualenv = pythonSet.mkVirtualEnv "streamd-dev-env" workspace.deps.all;
pkgs = mkPkgs system;
toolchain = pkgs.rust-bin.stable.latest.default.override {
extensions = [
"rust-src"
"rust-analyzer"
];
};
in
{
default = pkgs.mkShell {
packages = [
virtualenv
pkgs.uv
toolchain
pkgs.commitizen
];
env = {
UV_NO_SYNC = "1";
UV_PYTHON = pythonSet.python.interpreter;
UV_PYTHON_DOWNLOADS = "never";
};
shellHook = ''
unset PYTHONPATH
export REPO_ROOT=$(git rev-parse --show-toplevel)
${(mkGitHooksCheck system).shellHook}
'';
};


@@ -1,30 +0,0 @@
[project]
name = "streamd"
version = "0.1.0"
description = "Searching for tags in streams"
readme = "README.md"
requires-python = ">=3.13"
dependencies = [
"click==8.3.1",
"mistletoe==1.5.1",
"pydantic==2.12.5",
"pydantic-settings[yaml]==2.12.0",
"rich==14.3.2",
"typer==0.23.1",
"xdg-base-dirs==6.0.2",
]
[project.scripts]
streamd = "streamd:app"
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[dependency-groups]
dev = [
"basedpyright==1.38.0",
"faker==40.4.0",
"pytest==9.0.2",
"ruff==0.15.1",
]

40
src/cli/args.rs Normal file

@@ -0,0 +1,40 @@
use clap::{Parser, Subcommand};
use clap_complete::Shell;
#[derive(Parser)]
#[command(name = "streamd")]
#[command(
author,
version,
about = "Personal knowledge management and time-tracking CLI using @Tag annotations"
)]
pub struct Cli {
#[command(subcommand)]
pub command: Option<Commands>,
}
#[derive(Subcommand)]
pub enum Commands {
/// Create a new stream file
New,
/// Display open tasks
Todo,
/// Edit a stream file by position
Edit {
/// Position of the file to edit (1 = most recent, negative = from oldest)
#[arg(default_value = "1")]
number: i32,
},
/// Display extracted timesheets
Timesheet,
/// Generate shell completions
Completions {
/// Shell to generate completions for
#[arg(value_enum)]
shell: Shell,
},
}


@@ -0,0 +1,11 @@
use clap::CommandFactory;
use clap_complete::{generate, Shell};
use std::io;
use crate::cli::Cli;
pub fn run(shell: Shell) {
let mut cmd = Cli::command();
let name = cmd.get_name().to_string();
generate(shell, &mut cmd, name, &mut io::stdout());
}

73
src/cli/commands/edit.rs Normal file

@@ -0,0 +1,73 @@
use std::fs;
use std::process::Command;
use walkdir::WalkDir;
use crate::config::Settings;
use crate::error::StreamdError;
use crate::extract::parse_markdown_file;
use crate::localize::{localize_stream_file, TaskConfiguration};
use crate::models::LocalizedShard;
fn all_files() -> Result<Vec<LocalizedShard>, StreamdError> {
let settings = Settings::load()?;
let mut shards = Vec::new();
for entry in WalkDir::new(&settings.base_folder)
.max_depth(1)
.into_iter()
.filter_map(|e| e.ok())
{
let path = entry.path();
if path.extension().map(|e| e == "md").unwrap_or(false) {
let file_name = path.to_string_lossy().to_string();
let content = fs::read_to_string(path)?;
let stream_file = parse_markdown_file(&file_name, &content);
if let Ok(shard) = localize_stream_file(&stream_file, &TaskConfiguration) {
shards.push(shard);
}
}
}
Ok(shards)
}
pub fn run(number: i32) -> Result<(), StreamdError> {
let all_shards = all_files()?;
// Sort by moment (timestamp)
let mut sorted_shards = all_shards;
sorted_shards.sort_by_key(|s| s.moment);
if sorted_shards.is_empty() {
return Err(StreamdError::ConfigError("No files found".to_string()));
}
let selected_index = if number > 0 {
// 1 = most recent, 2 = second most recent, etc.
// (number == 0 falls through to the negative branch and is rejected there)
let idx = sorted_shards.len() as i32 - number;
if idx < 0 {
return Err(StreamdError::ConfigError(
"Argument out of range".to_string(),
));
}
idx as usize
} else {
// -1 = oldest, -2 = second oldest, etc.
let idx = (-number - 1) as usize;
if idx >= sorted_shards.len() {
return Err(StreamdError::ConfigError(
"Argument out of range".to_string(),
));
}
idx
};
if let Some(file_path) = sorted_shards[selected_index].location.get("file") {
let editor = std::env::var("EDITOR").unwrap_or_else(|_| "vi".to_string());
Command::new(&editor).arg(file_path).status()?;
}
Ok(())
}

5
src/cli/commands/mod.rs Normal file

@@ -0,0 +1,5 @@
pub mod completions;
pub mod edit;
pub mod new;
pub mod timesheet;
pub mod todo;

60
src/cli/commands/new.rs Normal file

@@ -0,0 +1,60 @@
use std::fs;
use std::io::Write;
use std::path::Path;
use std::process::Command;
use chrono::Local;
use crate::config::Settings;
use crate::error::StreamdError;
use crate::extract::parse_markdown_file;
pub fn run() -> Result<(), StreamdError> {
let settings = Settings::load()?;
let streamd_directory = &settings.base_folder;
let timestamp = Local::now().format("%Y%m%d-%H%M%S").to_string();
let preliminary_file_name = format!("{}_wip.md", timestamp);
let preliminary_path = Path::new(streamd_directory).join(&preliminary_file_name);
// Create initial file with heading
let content = "# ";
let mut file = fs::File::create(&preliminary_path)?;
file.write_all(content.as_bytes())?;
drop(file);
// Open in editor
let editor = std::env::var("EDITOR").unwrap_or_else(|_| "vi".to_string());
let status = Command::new(&editor).arg(&preliminary_path).status()?;
if !status.success() {
return Err(StreamdError::IoError(std::io::Error::other(
"Editor exited with non-zero status",
)));
}
// Read the edited content
let edited_content = fs::read_to_string(&preliminary_path)?;
let parsed_content =
parse_markdown_file(preliminary_path.to_string_lossy().as_ref(), &edited_content);
// Determine final filename based on markers
let final_file_name = if let Some(ref shard) = parsed_content.shard {
if !shard.markers.is_empty() {
format!("{} {}.md", timestamp, shard.markers.join(" "))
} else {
format!("{}.md", timestamp)
}
} else {
format!("{}.md", timestamp)
};
let final_path = Path::new(streamd_directory).join(&final_file_name);
// Rename the file
fs::rename(&preliminary_path, &final_path)?;
println!("Saved as {}", final_file_name);
Ok(())
}
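The filename derivation in `run` above reduces to a small pure function over the timestamp and the markers parsed from the edited note. A sketch under that reading (the `final_file_name` helper is hypothetical, std-only):

```rust
/// Derive the final file name: markers, when present, become a
/// space-separated suffix between the timestamp and the extension.
fn final_file_name(timestamp: &str, markers: &[String]) -> String {
    if markers.is_empty() {
        format!("{}.md", timestamp)
    } else {
        format!("{} {}.md", timestamp, markers.join(" "))
    }
}

fn main() {
    let markers = vec!["Project".to_string(), "Todo".to_string()];
    assert_eq!(
        final_file_name("20260329-181915", &markers),
        "20260329-181915 Project Todo.md"
    );
    assert_eq!(final_file_name("20260329-181915", &[]), "20260329-181915.md");
    println!("ok");
}
```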


@ -0,0 +1,52 @@
use std::fs;
use walkdir::WalkDir;
use crate::config::Settings;
use crate::error::StreamdError;
use crate::extract::parse_markdown_file;
use crate::localize::localize_stream_file;
use crate::models::LocalizedShard;
use crate::timesheet::{extract_timesheets, BasicTimesheetConfiguration};
fn all_files() -> Result<Vec<LocalizedShard>, StreamdError> {
let settings = Settings::load()?;
let mut shards = Vec::new();
for entry in WalkDir::new(&settings.base_folder)
.max_depth(1)
.into_iter()
.filter_map(|e| e.ok())
{
let path = entry.path();
if path.extension().map(|e| e == "md").unwrap_or(false) {
let file_name = path.to_string_lossy().to_string();
let content = fs::read_to_string(path)?;
let stream_file = parse_markdown_file(&file_name, &content);
if let Ok(shard) = localize_stream_file(&stream_file, &BasicTimesheetConfiguration) {
shards.push(shard);
}
}
}
Ok(shards)
}
pub fn run() -> Result<(), StreamdError> {
let all_shards = all_files()?;
let mut sheets = extract_timesheets(&all_shards)?;
sheets.sort_by_key(|s| s.date);
for sheet in sheets {
println!("{}", sheet.date);
let times: Vec<String> = sheet
.timecards
.iter()
.map(|card| format!("{},{}", card.from_time, card.to_time))
.collect();
println!("{}", times.join(","));
}
Ok(())
}

56
src/cli/commands/todo.rs Normal file

@ -0,0 +1,56 @@
use std::fs;
use walkdir::WalkDir;
use crate::config::Settings;
use crate::error::StreamdError;
use crate::extract::parse_markdown_file;
use crate::localize::{localize_stream_file, TaskConfiguration};
use crate::models::LocalizedShard;
use crate::query::find_shard_by_position;
fn all_files() -> Result<Vec<LocalizedShard>, StreamdError> {
let settings = Settings::load()?;
let mut shards = Vec::new();
for entry in WalkDir::new(&settings.base_folder)
.max_depth(1)
.into_iter()
.filter_map(|e| e.ok())
{
let path = entry.path();
if path.extension().map(|e| e == "md").unwrap_or(false) {
let file_name = path.to_string_lossy().to_string();
let content = fs::read_to_string(path)?;
let stream_file = parse_markdown_file(&file_name, &content);
if let Ok(shard) = localize_stream_file(&stream_file, &TaskConfiguration) {
shards.push(shard);
}
}
}
Ok(shards)
}
pub fn run() -> Result<(), StreamdError> {
let all_shards = all_files()?;
for task_shard in find_shard_by_position(&all_shards, "task", "open") {
if let Some(file_path) = task_shard.location.get("file") {
let content = fs::read_to_string(file_path)?;
let lines: Vec<&str> = content.lines().collect();
let start = task_shard.start_line.saturating_sub(1);
let end = std::cmp::min(task_shard.end_line, lines.len());
println!("--- {}:{} ---", file_path, task_shard.start_line);
for line in &lines[start..end] {
println!("{}", line);
}
println!();
}
}
Ok(())
}
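The line-range handling in `run` above (1-based inclusive range, saturating start, clamped end) can be isolated as a pure function; a sketch assuming a hypothetical `slice_lines` helper:

```rust
/// Select the lines of `content` covered by a 1-based, inclusive line range,
/// clamping both ends so out-of-range shards never panic.
fn slice_lines(content: &str, start_line: usize, end_line: usize) -> Vec<&str> {
    let lines: Vec<&str> = content.lines().collect();
    let start = start_line.saturating_sub(1);
    let end = std::cmp::min(end_line, lines.len());
    if start >= end {
        return Vec::new();
    }
    lines[start..end].to_vec()
}

fn main() {
    let text = "a\nb\nc\nd";
    assert_eq!(slice_lines(text, 2, 3), vec!["b", "c"]);
    assert_eq!(slice_lines(text, 1, 99), vec!["a", "b", "c", "d"]); // end clamped
    assert_eq!(slice_lines(text, 0, 1), vec!["a"]); // start saturates
    println!("ok");
}
```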

4
src/cli/mod.rs Normal file

@ -0,0 +1,4 @@
pub mod args;
pub mod commands;
pub use args::{Cli, Commands};

44
src/config.rs Normal file

@ -0,0 +1,44 @@
use directories::ProjectDirs;
use serde::{Deserialize, Serialize};
use std::env;
use std::fs;
use std::path::PathBuf;
use crate::error::StreamdError;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Settings {
pub base_folder: String,
}
impl Default for Settings {
fn default() -> Self {
Self {
base_folder: env::current_dir()
.map(|p| p.to_string_lossy().to_string())
.unwrap_or_else(|_| ".".to_string()),
}
}
}
impl Settings {
pub fn load() -> Result<Self, StreamdError> {
let config_path = Self::config_path();
if config_path.exists() {
let content = fs::read_to_string(&config_path)?;
let settings: Settings = serde_yaml::from_str(&content)?;
Ok(settings)
} else {
Ok(Settings::default())
}
}
fn config_path() -> PathBuf {
if let Some(proj_dirs) = ProjectDirs::from("", "", "streamd") {
proj_dirs.config_dir().join("config.yaml")
} else {
// Fallback literal; note that "~" is not expanded by std::fs
PathBuf::from("~/.config/streamd/config.yaml")
}
}
}

25
src/error.rs Normal file

@ -0,0 +1,25 @@
use thiserror::Error;
#[derive(Error, Debug)]
pub enum StreamdError {
#[error("Could not extract date from file name: {0}")]
DateExtractionError(String),
#[error("Timesheet error: {0}")]
TimesheetError(String),
#[error("Configuration error: {0}")]
ConfigError(String),
#[error("IO error: {0}")]
IoError(#[from] std::io::Error),
#[error("YAML error: {0}")]
YamlError(#[from] serde_yaml::Error),
}
impl From<StreamdError> for miette::Report {
fn from(err: StreamdError) -> Self {
miette::Report::msg(err.to_string())
}
}

5
src/extract/mod.rs Normal file

@ -0,0 +1,5 @@
mod parser;
mod tag_extraction;
pub use parser::parse_markdown_file;
pub use tag_extraction::{extract_markers_and_tags, has_markers};

739
src/extract/parser.rs Normal file

@ -0,0 +1,739 @@
use std::collections::HashMap;
use pulldown_cmark::{Event, HeadingLevel, Options, Parser, Tag, TagEnd};
use crate::extract::tag_extraction::{extract_markers_and_tags, has_markers};
use crate::models::{Shard, StreamFile};
/// Information about a block element.
#[derive(Debug, Clone)]
struct BlockInfo {
start_line: usize,
end_line: usize,
block_type: BlockType,
events: Vec<Event<'static>>,
}
#[derive(Debug, Clone, PartialEq)]
enum BlockType {
Paragraph,
Heading(usize),
List,
ListItem,
CodeBlock,
#[allow(dead_code)]
Other,
}
/// Build a shard, applying simplification rules.
/// If the shard has exactly one child with the same line range and no markers/tags,
/// return that child instead.
fn build_shard(
start_line: usize,
end_line: usize,
markers: Vec<String>,
tags: Vec<String>,
children: Vec<Shard>,
) -> Shard {
if children.len() == 1
&& tags.is_empty()
&& markers.is_empty()
&& children[0].start_line == start_line
&& children[0].end_line == end_line
{
return children.into_iter().next().unwrap();
}
Shard {
markers,
tags,
start_line,
end_line,
children,
}
}
/// Merge shards where the first one becomes the parent with its markers/tags preserved.
fn merge_into_first_shard(
mut shards: Vec<Shard>,
start_line: usize,
end_line: usize,
additional_tags: Vec<String>,
) -> Shard {
if shards.is_empty() {
return build_shard(start_line, end_line, vec![], additional_tags, vec![]);
}
let mut first = shards.remove(0);
first.start_line = start_line;
first.end_line = end_line;
first.children = shards;
first.tags.extend(additional_tags);
first
}
/// Parse a markdown file into a StreamFile with shard structure.
pub fn parse_markdown_file(file_name: &str, file_content: &str) -> StreamFile {
let end_line = std::cmp::max(file_content.lines().count(), 1);
// Handle empty file
if file_content.is_empty() {
return StreamFile {
file_name: file_name.to_string(),
shard: Some(Shard::new(1, 1)),
};
}
// Parse the markdown with offset tracking
let mut options = Options::empty();
options.insert(Options::ENABLE_STRIKETHROUGH);
let parser = Parser::new_ext(file_content, options);
// Collect blocks with their line information
let blocks = collect_blocks(file_content, parser);
// Parse into shard structure
let shard = if blocks.is_empty() {
Shard::new(1, end_line)
} else {
parse_header_shards(&blocks, 1, end_line, false).unwrap_or_else(|| Shard::new(1, end_line))
};
StreamFile {
file_name: file_name.to_string(),
shard: Some(shard),
}
}
/// Collect block-level elements from the parser.
fn collect_blocks(content: &str, parser: Parser) -> Vec<BlockInfo> {
let mut blocks = Vec::new();
let mut current_block: Option<BlockInfo> = None;
let mut depth = 0;
let mut list_items: Vec<BlockInfo> = Vec::new();
let mut in_list = false;
let mut list_start_line = 0;
// Pre-compute line starts for offset-to-line mapping
let line_starts: Vec<usize> = std::iter::once(0)
.chain(content.match_indices('\n').map(|(i, _)| i + 1))
.collect();
let offset_to_line =
|offset: usize| -> usize { line_starts.partition_point(|&start| start <= offset) };
for (event, range) in parser.into_offset_iter() {
let line = offset_to_line(range.start);
match &event {
Event::Start(Tag::Paragraph) => {
if depth == 0 {
current_block = Some(BlockInfo {
start_line: line,
end_line: line,
block_type: BlockType::Paragraph,
events: Vec::new(),
});
}
depth += 1;
if let Some(ref mut block) = current_block {
block.events.push(event.clone().into_static());
}
}
Event::End(TagEnd::Paragraph) => {
depth -= 1;
if let Some(ref mut block) = current_block {
block.events.push(event.clone().into_static());
block.end_line = line;
}
if depth == 0 {
if let Some(block) = current_block.take() {
if in_list {
list_items.push(block);
} else {
blocks.push(block);
}
}
}
}
Event::Start(Tag::Heading { level, .. }) => {
let heading_level = heading_level_to_usize(*level);
if depth == 0 {
current_block = Some(BlockInfo {
start_line: line,
end_line: line,
block_type: BlockType::Heading(heading_level),
events: Vec::new(),
});
}
depth += 1;
if let Some(ref mut block) = current_block {
block.events.push(event.clone().into_static());
}
}
Event::End(TagEnd::Heading(_)) => {
depth -= 1;
if let Some(ref mut block) = current_block {
block.events.push(event.clone().into_static());
block.end_line = line;
}
if depth == 0 {
if let Some(block) = current_block.take() {
blocks.push(block);
}
}
}
Event::Start(Tag::List(_)) => {
if !in_list {
in_list = true;
list_start_line = line;
list_items.clear();
}
depth += 1;
}
Event::End(TagEnd::List(_)) => {
depth -= 1;
if depth == 0 && in_list {
in_list = false;
// Create a list block containing all list items
if !list_items.is_empty() {
blocks.push(BlockInfo {
start_line: list_start_line,
end_line: line,
block_type: BlockType::List,
events: vec![], // List events are handled through list_items
});
// Store list items for later processing
for item in list_items.drain(..) {
blocks.push(BlockInfo {
block_type: BlockType::ListItem,
..item
});
}
}
}
}
Event::Start(Tag::Item) => {
if in_list {
current_block = Some(BlockInfo {
start_line: line,
end_line: line,
block_type: BlockType::ListItem,
events: Vec::new(),
});
}
}
Event::End(TagEnd::Item) => {
if let Some(ref mut block) = current_block {
block.end_line = line;
}
if let Some(block) = current_block.take() {
list_items.push(block);
}
}
Event::Start(Tag::CodeBlock(_)) => {
if depth == 0 {
current_block = Some(BlockInfo {
start_line: line,
end_line: line,
block_type: BlockType::CodeBlock,
events: Vec::new(),
});
}
depth += 1;
if let Some(ref mut block) = current_block {
block.events.push(event.clone().into_static());
}
}
Event::End(TagEnd::CodeBlock) => {
depth -= 1;
if let Some(ref mut block) = current_block {
block.events.push(event.clone().into_static());
block.end_line = line;
}
if depth == 0 {
if let Some(block) = current_block.take() {
blocks.push(block);
}
}
}
_ => {
if let Some(ref mut block) = current_block {
block.events.push(event.clone().into_static());
}
}
}
}
blocks
}
fn heading_level_to_usize(level: HeadingLevel) -> usize {
match level {
HeadingLevel::H1 => 1,
HeadingLevel::H2 => 2,
HeadingLevel::H3 => 3,
HeadingLevel::H4 => 4,
HeadingLevel::H5 => 5,
HeadingLevel::H6 => 6,
}
}
/// Check if a block has markers.
fn block_has_markers(block: &BlockInfo) -> bool {
has_markers(block.events.iter().cloned())
}
/// Extract markers and tags from a block.
fn extract_block_markers_and_tags(block: &BlockInfo) -> (Vec<String>, Vec<String>) {
extract_markers_and_tags(block.events.iter().cloned())
}
/// Find positions of paragraph blocks that have markers.
fn find_paragraph_shard_positions(blocks: &[BlockInfo]) -> Vec<usize> {
blocks
.iter()
.enumerate()
.filter(|(_, block)| block.block_type == BlockType::Paragraph && block_has_markers(block))
.map(|(i, _)| i)
.collect()
}
/// Find positions of headings at a specific level.
fn find_headings_by_level(blocks: &[BlockInfo], level: usize) -> Vec<usize> {
blocks
.iter()
.enumerate()
.filter(|(_, block)| matches!(block.block_type, BlockType::Heading(l) if l == level))
.map(|(i, _)| i)
.collect()
}
/// Calculate the heading level to split on for the next parsing step.
fn calculate_heading_level_for_next_split(blocks: &[BlockInfo]) -> Option<usize> {
// Find heading levels that have markers (excluding first block)
let levels_with_markers: Vec<usize> = blocks[1..]
.iter()
.filter_map(|block| {
if let BlockType::Heading(level) = block.block_type {
if block_has_markers(block) {
return Some(level);
}
}
None
})
.collect();
if levels_with_markers.is_empty() {
return None;
}
// Count headings at each level
let mut level_counts: HashMap<usize, usize> = HashMap::new();
for block in blocks {
if let BlockType::Heading(level) = block.block_type {
*level_counts.entry(level).or_insert(0) += 1;
}
}
// Return the minimum level that either:
// - Has count >= 2
// - Has a marker (excluding first block)
let levels_with_multiple: Vec<usize> = level_counts
.into_iter()
.filter(|(_, count)| *count >= 2)
.map(|(level, _)| level)
.collect();
let mut candidates = levels_with_multiple;
candidates.extend(levels_with_markers);
candidates.into_iter().min()
}
/// Split a slice at the given positions.
fn split_at<T: Clone>(items: &[T], positions: &[usize]) -> Vec<Vec<T>> {
let mut all_positions: Vec<usize> = vec![0];
all_positions.extend(positions.iter().cloned());
all_positions.push(items.len());
all_positions.sort();
all_positions.dedup();
all_positions
.windows(2)
.map(|window| items[window[0]..window[1]].to_vec())
.filter(|v| !v.is_empty())
.collect()
}
/// Parse blocks into shard hierarchy based on headings.
fn parse_header_shards(
blocks: &[BlockInfo],
start_line: usize,
end_line: usize,
use_first_child_as_header: bool,
) -> Option<Shard> {
if blocks.is_empty() {
return Some(build_shard(start_line, end_line, vec![], vec![], vec![]));
}
let Some(heading_level) = calculate_heading_level_for_next_split(blocks) else {
return parse_multiple_block_shards(blocks, start_line, end_line, true).0;
};
let heading_positions = find_headings_by_level(blocks, heading_level);
let block_groups = split_at(blocks, &heading_positions);
let mut children = Vec::new();
for (i, group) in block_groups.iter().enumerate() {
if group.is_empty() {
continue;
}
let child_start_line = group[0].start_line;
let child_end_line = if i + 1 < block_groups.len() && !block_groups[i + 1].is_empty() {
block_groups[i + 1][0].start_line - 1
} else {
end_line
};
if let Some(child_shard) = parse_header_shards(
group,
child_start_line,
child_end_line,
i > 0 || heading_positions.contains(&0),
) {
children.push(child_shard);
}
}
if use_first_child_as_header && !children.is_empty() {
Some(merge_into_first_shard(
children,
start_line,
end_line,
vec![],
))
} else {
Some(build_shard(start_line, end_line, vec![], vec![], children))
}
}
/// Parse multiple blocks into shards.
fn parse_multiple_block_shards(
blocks: &[BlockInfo],
start_line: usize,
end_line: usize,
enforce_shard: bool,
) -> (Option<Shard>, Vec<String>) {
if blocks.is_empty() {
if enforce_shard {
return (
Some(build_shard(start_line, end_line, vec![], vec![], vec![])),
vec![],
);
}
return (None, vec![]);
}
let is_first_block_heading =
matches!(blocks[0].block_type, BlockType::Heading(_)) && block_has_markers(&blocks[0]);
let paragraph_positions = find_paragraph_shard_positions(blocks);
let mut children = Vec::new();
let mut tags = Vec::new();
let mut is_first_block_only_with_marker = false;
for (i, block) in blocks.iter().enumerate() {
if paragraph_positions.contains(&i) {
is_first_block_only_with_marker = i == 0;
}
let child_start_line = block.start_line;
let child_end_line = if i + 1 < blocks.len() {
blocks[i + 1].start_line - 1
} else {
end_line
};
let (child_shard, child_tags) =
parse_single_block_shard(block, child_start_line, child_end_line);
if let Some(shard) = child_shard {
children.push(shard);
}
tags.extend(child_tags);
}
if children.is_empty() && !enforce_shard {
return (None, tags);
}
if is_first_block_heading || is_first_block_only_with_marker {
(
Some(merge_into_first_shard(children, start_line, end_line, tags)),
vec![],
)
} else {
(
Some(build_shard(start_line, end_line, vec![], tags, children)),
vec![],
)
}
}
/// Parse a single block into a shard.
fn parse_single_block_shard(
block: &BlockInfo,
start_line: usize,
end_line: usize,
) -> (Option<Shard>, Vec<String>) {
match block.block_type {
BlockType::Paragraph | BlockType::Heading(_) => {
let (markers, tags) = extract_block_markers_and_tags(block);
if markers.is_empty() {
(None, tags)
} else {
(
Some(build_shard(start_line, end_line, markers, tags, vec![])),
vec![],
)
}
}
BlockType::List | BlockType::ListItem => {
// List handling is complex - for now, extract any markers/tags
let (markers, tags) = extract_block_markers_and_tags(block);
if markers.is_empty() {
(None, tags)
} else {
(
Some(build_shard(start_line, end_line, markers, tags, vec![])),
vec![],
)
}
}
_ => (None, vec![]),
}
}
#[cfg(test)]
mod tests {
use super::*;
fn make_file_name() -> String {
"test.md".to_string()
}
#[test]
fn test_parse_empty_file() {
let result = parse_markdown_file(&make_file_name(), "");
assert_eq!(
result,
StreamFile {
file_name: make_file_name(),
shard: Some(Shard::new(1, 1)),
}
);
}
#[test]
fn test_parse_basic_one_line_file() {
let result = parse_markdown_file(&make_file_name(), "Hello World");
assert_eq!(
result,
StreamFile {
file_name: make_file_name(),
shard: Some(Shard::new(1, 1)),
}
);
}
#[test]
fn test_parse_basic_multi_line_file() {
let result = parse_markdown_file(&make_file_name(), "Hello World\n\nHello again!");
assert_eq!(
result,
StreamFile {
file_name: make_file_name(),
shard: Some(Shard::new(1, 3)),
}
);
}
#[test]
fn test_parse_single_line_with_tag() {
let result = parse_markdown_file(&make_file_name(), "@Tag Hello World");
assert_eq!(
result,
StreamFile {
file_name: make_file_name(),
shard: Some(Shard {
markers: vec!["Tag".to_string()],
tags: vec![],
start_line: 1,
end_line: 1,
children: vec![],
}),
}
);
}
#[test]
fn test_parse_single_line_with_two_tags() {
let result = parse_markdown_file(&make_file_name(), "@Marker1 @Marker2 Hello World");
assert_eq!(
result,
StreamFile {
file_name: make_file_name(),
shard: Some(Shard {
markers: vec!["Marker1".to_string(), "Marker2".to_string()],
tags: vec![],
start_line: 1,
end_line: 1,
children: vec![],
}),
}
);
}
#[test]
fn test_parse_single_line_with_two_tags_and_misplaced_tag() {
let result = parse_markdown_file(&make_file_name(), "@Tag1 @Tag2 Hello World @Tag3");
assert_eq!(
result,
StreamFile {
file_name: make_file_name(),
shard: Some(Shard {
markers: vec!["Tag1".to_string(), "Tag2".to_string()],
tags: vec!["Tag3".to_string()],
start_line: 1,
end_line: 1,
children: vec![],
}),
}
);
}
#[test]
fn test_parse_header_without_markers() {
let result = parse_markdown_file(&make_file_name(), "# Heading\n\n## Subheading");
assert_eq!(
result,
StreamFile {
file_name: make_file_name(),
shard: Some(Shard::new(1, 3)),
}
);
}
#[test]
fn test_parse_ignores_tags_in_code() {
let result = parse_markdown_file(&make_file_name(), "```\n@Marker\n```");
assert_eq!(
result,
StreamFile {
file_name: make_file_name(),
shard: Some(Shard::new(1, 3)),
}
);
}
#[test]
fn test_parse_finds_tags_in_italic_text() {
let result = parse_markdown_file(&make_file_name(), "*@ItalicMarker*");
assert_eq!(
result,
StreamFile {
file_name: make_file_name(),
shard: Some(Shard {
markers: vec!["ItalicMarker".to_string()],
tags: vec![],
start_line: 1,
end_line: 1,
children: vec![],
}),
}
);
}
#[test]
fn test_parse_finds_tags_in_bold_text() {
let result = parse_markdown_file(&make_file_name(), "**@BoldMarker**");
assert_eq!(
result,
StreamFile {
file_name: make_file_name(),
shard: Some(Shard {
markers: vec!["BoldMarker".to_string()],
tags: vec![],
start_line: 1,
end_line: 1,
children: vec![],
}),
}
);
}
#[test]
fn test_parse_finds_tags_in_strikethrough_text() {
let result = parse_markdown_file(&make_file_name(), "~~@StrikeMarker~~");
assert_eq!(
result,
StreamFile {
file_name: make_file_name(),
shard: Some(Shard {
markers: vec!["StrikeMarker".to_string()],
tags: vec![],
start_line: 1,
end_line: 1,
children: vec![],
}),
}
);
}
#[test]
fn test_parse_finds_tags_in_link() {
let result = parse_markdown_file(&make_file_name(), "[@LinkMarker](https://example.com)");
assert_eq!(
result,
StreamFile {
file_name: make_file_name(),
shard: Some(Shard {
markers: vec!["LinkMarker".to_string()],
tags: vec![],
start_line: 1,
end_line: 1,
children: vec![],
}),
}
);
}
#[test]
fn test_parse_continues_looking_for_markers_after_first_link_marker() {
let result = parse_markdown_file(
&make_file_name(),
"[@LinkMarker1](https://example.com) [@LinkMarker2](https://example.com)",
);
assert_eq!(
result,
StreamFile {
file_name: make_file_name(),
shard: Some(Shard {
markers: vec!["LinkMarker1".to_string(), "LinkMarker2".to_string()],
tags: vec![],
start_line: 1,
end_line: 1,
children: vec![],
}),
}
);
}
}
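The `split_at` helper above drives the heading-based grouping: each heading position starts a new group of blocks, and empty groups are dropped. Its behavior can be checked in isolation with the same function body (std-only sketch):

```rust
/// Split a slice at the given positions (same logic as the parser's helper):
/// each position starts a new group, empty groups are dropped.
fn split_at<T: Clone>(items: &[T], positions: &[usize]) -> Vec<Vec<T>> {
    let mut all_positions: Vec<usize> = vec![0];
    all_positions.extend(positions.iter().cloned());
    all_positions.push(items.len());
    all_positions.sort();
    all_positions.dedup();
    all_positions
        .windows(2)
        .map(|window| items[window[0]..window[1]].to_vec())
        .filter(|v| !v.is_empty())
        .collect()
}

fn main() {
    // Headings at indices 2 and 4 split five blocks into three groups.
    assert_eq!(
        split_at(&[1, 2, 3, 4, 5], &[2, 4]),
        vec![vec![1, 2], vec![3, 4], vec![5]]
    );
    // A position at 0 does not create a leading empty group.
    assert_eq!(split_at(&[1, 2], &[0]), vec![vec![1, 2]]);
    println!("ok");
}
```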


@ -0,0 +1,219 @@
use once_cell::sync::Lazy;
use pulldown_cmark::{Event, Tag, TagEnd};
use regex::Regex;
/// Regex pattern for matching @Tags.
/// Matches @ followed by any characters except whitespace, *, `, ~, [, ]
static TAG_PATTERN: Lazy<Regex> = Lazy::new(|| Regex::new(r"@([^\s*`~\[\]]+)").unwrap());
/// Token type for tag extraction state machine.
#[derive(Debug, Clone)]
enum Token {
Tag(String),
Content,
Whitespace,
}
/// Tokenizes text content into Tags, Content, and Whitespace tokens.
fn tokenize(text: &str) -> Vec<Token> {
let mut tokens = Vec::new();
let mut last_end = 0;
for mat in TAG_PATTERN.find_iter(text) {
// Handle content before the match
let before = &text[last_end..mat.start()];
if !before.is_empty() {
if before.chars().all(|c| c.is_whitespace()) {
tokens.push(Token::Whitespace);
} else {
tokens.push(Token::Content);
}
}
// Extract the tag name (without the @)
let tag_name = &text[mat.start() + 1..mat.end()];
tokens.push(Token::Tag(tag_name.to_string()));
last_end = mat.end();
}
// Handle remaining content after last match
if last_end < text.len() {
let remaining = &text[last_end..];
if !remaining.is_empty() {
if remaining.chars().all(|c| c.is_whitespace()) {
tokens.push(Token::Whitespace);
} else {
tokens.push(Token::Content);
}
}
}
tokens
}
/// Extract markers and tags from a sequence of pulldown-cmark events.
///
/// Markers are @-prefixed identifiers that appear before any non-whitespace content.
/// Tags are @-prefixed identifiers that appear after content has started.
///
/// Returns (markers, tags).
pub fn extract_markers_and_tags<'a>(
events: impl Iterator<Item = Event<'a>>,
) -> (Vec<String>, Vec<String>) {
let mut markers = Vec::new();
let mut tags = Vec::new();
let mut boundary_crossed = false;
let mut in_code = false;
for event in events {
match event {
Event::Start(Tag::CodeBlock(_)) | Event::Start(Tag::MetadataBlock(_)) => {
in_code = true;
}
Event::End(TagEnd::CodeBlock) | Event::End(TagEnd::MetadataBlock(_)) => {
in_code = false;
}
Event::Code(_) => {
// Inline code is a content boundary but we don't extract tags from it
boundary_crossed = true;
}
Event::Text(text) | Event::InlineHtml(text) if !in_code => {
for token in tokenize(&text) {
match token {
Token::Whitespace => {}
Token::Tag(name) => {
if boundary_crossed {
tags.push(name);
} else {
markers.push(name);
}
}
Token::Content => {
boundary_crossed = true;
}
}
}
}
_ => {}
}
}
(markers, tags)
}
/// Check if the events contain any markers (tags before content).
pub fn has_markers<'a>(events: impl Iterator<Item = Event<'a>>) -> bool {
let (markers, _) = extract_markers_and_tags(events);
!markers.is_empty()
}
#[cfg(test)]
mod tests {
use super::*;
use pulldown_cmark::Parser;
fn extract_from_text(text: &str) -> (Vec<String>, Vec<String>) {
let mut options = pulldown_cmark::Options::empty();
options.insert(pulldown_cmark::Options::ENABLE_STRIKETHROUGH);
let parser = Parser::new_ext(text, options);
extract_markers_and_tags(parser)
}
#[test]
fn test_extract_single_marker() {
let (markers, tags) = extract_from_text("@Tag Hello World");
assert_eq!(markers, vec!["Tag"]);
assert!(tags.is_empty());
}
#[test]
fn test_extract_two_markers() {
let (markers, tags) = extract_from_text("@Marker1 @Marker2 Hello World");
assert_eq!(markers, vec!["Marker1", "Marker2"]);
assert!(tags.is_empty());
}
#[test]
fn test_extract_markers_and_tags() {
let (markers, tags) = extract_from_text("@Tag1 @Tag2 Hello World @Tag3");
assert_eq!(markers, vec!["Tag1", "Tag2"]);
assert_eq!(tags, vec!["Tag3"]);
}
#[test]
fn test_extract_inner_tags() {
let (markers, tags) = extract_from_text("Hello @Tag1 World!");
assert!(markers.is_empty());
assert_eq!(tags, vec!["Tag1"]);
}
#[test]
fn test_extract_ignores_code_blocks() {
let (markers, tags) = extract_from_text("```\n@Marker\n```");
assert!(markers.is_empty());
assert!(tags.is_empty());
}
#[test]
fn test_extract_italic_marker() {
let (markers, tags) = extract_from_text("*@ItalicMarker*");
assert_eq!(markers, vec!["ItalicMarker"]);
assert!(tags.is_empty());
}
#[test]
fn test_extract_bold_marker() {
let (markers, tags) = extract_from_text("**@BoldMarker**");
assert_eq!(markers, vec!["BoldMarker"]);
assert!(tags.is_empty());
}
#[test]
fn test_extract_strikethrough_marker() {
let (markers, tags) = extract_from_text("~~@StrikeMarker~~");
assert_eq!(markers, vec!["StrikeMarker"]);
assert!(tags.is_empty());
}
#[test]
fn test_extract_link_marker() {
let (markers, tags) = extract_from_text("[@LinkMarker](https://example.com)");
assert_eq!(markers, vec!["LinkMarker"]);
assert!(tags.is_empty());
}
#[test]
fn test_extract_multiple_link_markers() {
let (markers, tags) = extract_from_text(
"[@LinkMarker1](https://example.com) [@LinkMarker2](https://example.com)",
);
assert_eq!(markers, vec!["LinkMarker1", "LinkMarker2"]);
assert!(tags.is_empty());
}
#[test]
fn test_has_markers_true() {
let parser = Parser::new("@Tag Hello");
assert!(has_markers(parser));
}
#[test]
fn test_has_markers_false() {
let parser = Parser::new("Hello @Tag");
assert!(!has_markers(parser));
}
#[test]
fn test_empty_text() {
let (markers, tags) = extract_from_text("");
assert!(markers.is_empty());
assert!(tags.is_empty());
}
#[test]
fn test_no_tags() {
let (markers, tags) = extract_from_text("Hello World");
assert!(markers.is_empty());
assert!(tags.is_empty());
}
}
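The marker-vs-tag split described above (@-words before any plain content are markers, @-words after it are tags) can be sketched without the markdown machinery. This is a simplified std-only version: plain whitespace splitting stands in for the regex, so punctuation handling differs from the real tokenizer:

```rust
/// Classify @-words: those before the first plain word are markers,
/// those after it are tags (simplified whitespace tokenizer).
fn classify(text: &str) -> (Vec<String>, Vec<String>) {
    let mut markers = Vec::new();
    let mut tags = Vec::new();
    let mut boundary_crossed = false;
    for word in text.split_whitespace() {
        if let Some(name) = word.strip_prefix('@') {
            if boundary_crossed {
                tags.push(name.to_string());
            } else {
                markers.push(name.to_string());
            }
        } else {
            boundary_crossed = true;
        }
    }
    (markers, tags)
}

fn main() {
    let (markers, tags) = classify("@Tag1 @Tag2 Hello World @Tag3");
    assert_eq!(markers, vec!["Tag1", "Tag2"]);
    assert_eq!(tags, vec!["Tag3"]);
    println!("ok");
}
```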

14
src/lib.rs Normal file

@ -0,0 +1,14 @@
pub mod cli;
pub mod config;
pub mod error;
pub mod extract;
pub mod localize;
pub mod models;
pub mod query;
pub mod timesheet;
pub use error::StreamdError;
pub use models::{
Dimension, LocalizedShard, Marker, MarkerPlacement, RepositoryConfiguration, Shard,
SpecialDayType, StreamFile, Timecard, Timesheet,
};


@ -0,0 +1,448 @@
use std::collections::BTreeSet;
use indexmap::IndexMap;
use crate::models::{Dimension, Marker, MarkerPlacement, RepositoryConfiguration};
/// Merge two dimensions, with the second taking precedence.
///
/// - display_name: second wins if non-empty, else base
/// - comment: second wins if not None, else base
/// - propagate: second wins if explicitly set, else base
pub fn merge_single_dimension(base: &Dimension, second: &Dimension) -> Dimension {
Dimension {
display_name: if second.display_name.is_empty() {
base.display_name.clone()
} else {
second.display_name.clone()
},
comment: if second.comment.is_some() {
second.comment.clone()
} else {
base.comment.clone()
},
propagate: if second.propagate_was_set {
second.propagate
} else {
base.propagate
},
propagate_was_set: second.propagate_was_set || base.propagate_was_set,
}
}
/// Merge two dimension maps.
pub fn merge_dimensions(
base: &IndexMap<String, Dimension>,
second: &IndexMap<String, Dimension>,
) -> IndexMap<String, Dimension> {
let mut merged = base.clone();
for (key, second_dim) in second {
if let Some(base_dim) = merged.get(key) {
merged.insert(key.clone(), merge_single_dimension(base_dim, second_dim));
} else {
merged.insert(key.clone(), second_dim.clone());
}
}
merged
}
/// Create a placement identity tuple for deduplication.
/// We use BTreeSet to make it hashable and order-independent.
fn placement_identity(p: &MarkerPlacement) -> (BTreeSet<String>, String) {
(p.if_with.iter().cloned().collect(), p.dimension.clone())
}
/// Merge two markers, with the second taking precedence.
pub fn merge_single_marker(base: &Marker, second: &Marker) -> Marker {
let display_name = if second.display_name.is_empty() {
base.display_name.clone()
} else {
second.display_name.clone()
};
let mut merged_placements: Vec<MarkerPlacement> = Vec::new();
let mut seen: IndexMap<(BTreeSet<String>, String), usize> = IndexMap::new();
for placement in &base.placements {
let ident = placement_identity(placement);
seen.insert(ident, merged_placements.len());
merged_placements.push(placement.clone());
}
for placement in &second.placements {
let ident = placement_identity(placement);
if let Some(&idx) = seen.get(&ident) {
merged_placements[idx] = placement.clone();
} else {
seen.insert(ident, merged_placements.len());
merged_placements.push(placement.clone());
}
}
Marker {
display_name,
placements: merged_placements,
}
}
/// Merge two marker maps.
pub fn merge_markers(
base: &IndexMap<String, Marker>,
second: &IndexMap<String, Marker>,
) -> IndexMap<String, Marker> {
let mut merged = base.clone();
for (key, second_marker) in second {
if let Some(base_marker) = merged.get(key) {
merged.insert(key.clone(), merge_single_marker(base_marker, second_marker));
} else {
merged.insert(key.clone(), second_marker.clone());
}
}
merged
}
/// Merge two repository configurations.
pub fn merge_repository_configuration(
base: &RepositoryConfiguration,
second: &RepositoryConfiguration,
) -> RepositoryConfiguration {
RepositoryConfiguration {
dimensions: merge_dimensions(&base.dimensions, &second.dimensions),
markers: merge_markers(&base.markers, &second.markers),
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_second_overrides_display_name_when_non_empty() {
let base = Dimension::new("Base")
.with_comment("c1")
.with_propagate(true);
let second = Dimension::new("Second")
.with_comment("c2")
.with_propagate(false);
let merged = merge_single_dimension(&base, &second);
assert_eq!(merged.display_name, "Second");
assert_eq!(merged.comment, Some("c2".to_string()));
assert!(!merged.propagate);
}
#[test]
fn test_second_empty_display_name_falls_back_to_base() {
let base = Dimension::new("Base")
.with_comment("c1")
.with_propagate(true);
let second = Dimension::new("").with_comment("c2").with_propagate(false);
let merged = merge_single_dimension(&base, &second);
assert_eq!(merged.display_name, "Base");
assert_eq!(merged.comment, Some("c2".to_string()));
assert!(!merged.propagate);
}
#[test]
fn test_second_comment_none_does_not_erase_base_comment() {
let base = Dimension::new("Base")
.with_comment("keep")
.with_propagate(true);
let mut second = Dimension::new("Second");
second.propagate = false;
second.propagate_was_set = true;
let merged = merge_single_dimension(&base, &second);
assert_eq!(merged.display_name, "Second");
assert_eq!(merged.comment, Some("keep".to_string()));
}
#[test]
fn test_second_comment_non_none_overrides_base_comment() {
let base = Dimension::new("Base")
.with_comment("c1")
.with_propagate(true);
let second = Dimension::new("Second")
.with_comment("c2")
.with_propagate(true);
let merged = merge_single_dimension(&base, &second);
assert_eq!(merged.comment, Some("c2".to_string()));
}
#[test]
fn test_second_propagate_overrides_base_when_provided() {
let base = Dimension::new("Base")
.with_comment("c1")
.with_propagate(true);
let second = Dimension::new("Second")
.with_comment("c2")
.with_propagate(false);
let merged = merge_single_dimension(&base, &second);
assert!(!merged.propagate);
}
#[test]
fn test_propagate_merging_retains_base_when_second_not_provided() {
let base = Dimension::new("Base")
.with_comment("c1")
.with_propagate(true);
let second = Dimension::new("Second").with_comment("c2");
let merged = merge_single_dimension(&base, &second);
assert!(merged.propagate);
}
#[test]
fn test_adds_new_keys_from_second() {
let mut base = IndexMap::new();
base.insert("a".to_string(), Dimension::new("A").with_propagate(true));
let mut second = IndexMap::new();
second.insert("b".to_string(), Dimension::new("B").with_propagate(false));
let merged = merge_dimensions(&base, &second);
assert!(merged.contains_key("a"));
assert!(merged.contains_key("b"));
assert_eq!(merged["a"].display_name, "A");
assert_eq!(merged["b"].display_name, "B");
}
#[test]
fn test_merges_existing_keys() {
let mut base = IndexMap::new();
base.insert(
"a".to_string(),
Dimension::new("A").with_comment("c1").with_propagate(true),
);
let mut second = IndexMap::new();
second.insert("a".to_string(), Dimension::new("A2").with_propagate(false));
let merged = merge_dimensions(&base, &second);
assert_eq!(merged["a"].display_name, "A2");
assert_eq!(merged["a"].comment, Some("c1".to_string()));
assert!(!merged["a"].propagate);
}
#[test]
fn test_does_not_mutate_inputs() {
let mut base = IndexMap::new();
base.insert(
"a".to_string(),
Dimension::new("A").with_comment("c1").with_propagate(true),
);
let mut second = IndexMap::new();
second.insert(
"b".to_string(),
Dimension::new("B").with_comment("c2").with_propagate(false),
);
let merged = merge_dimensions(&base, &second);
assert!(!base.contains_key("b"));
assert!(!second.contains_key("a"));
assert!(merged.contains_key("a"));
assert!(merged.contains_key("b"));
}
#[test]
fn test_second_marker_overrides_display_name_when_non_empty() {
let base = Marker::new("Base").with_placements(vec![MarkerPlacement::new("project")]);
let second = Marker::new("Second")
.with_placements(vec![MarkerPlacement::new("timesheet").with_value("coding")]);
let merged = merge_single_marker(&base, &second);
assert_eq!(merged.display_name, "Second");
assert_eq!(merged.placements.len(), 2);
assert_eq!(merged.placements[0].dimension, "project");
assert_eq!(merged.placements[1].dimension, "timesheet");
}
#[test]
fn test_second_marker_empty_display_name_falls_back_to_base() {
let base = Marker::new("Base").with_placements(vec![]);
let second = Marker::new("").with_placements(vec![]);
let merged = merge_single_marker(&base, &second);
assert_eq!(merged.display_name, "Base");
}
#[test]
fn test_appends_new_placements() {
let base = Marker::new("Base").with_placements(vec![MarkerPlacement::new("project")]);
let second = Marker::new("Second").with_placements(vec![MarkerPlacement::new("timesheet")
.with_if_with(vec!["Timesheet"])
.with_value("x")]);
let merged = merge_single_marker(&base, &second);
assert_eq!(merged.placements.len(), 2);
assert_eq!(merged.placements[0].dimension, "project");
assert_eq!(merged.placements[1].dimension, "timesheet");
}
#[test]
fn test_deduplicates_by_identity_and_second_overrides_base() {
let base = Marker::new("Base").with_placements(vec![
MarkerPlacement::new("d")
.with_if_with(vec!["A"])
.with_value("v"),
MarkerPlacement::new("d")
.with_if_with(vec!["B"])
.with_value("v2"),
]);
let second = Marker::new("Second").with_placements(vec![
MarkerPlacement::new("d")
.with_if_with(vec!["A"])
.with_value("v"),
MarkerPlacement::new("d")
.with_if_with(vec!["C"])
.with_value("v3"),
]);
let merged = merge_single_marker(&base, &second);
assert_eq!(merged.placements.len(), 3);
// First placement (A, d) should be from second
assert_eq!(
merged.placements[0].if_with.iter().collect::<Vec<_>>(),
vec!["A"]
);
// Second placement (B, d) should be from base
assert_eq!(
merged.placements[1].if_with.iter().collect::<Vec<_>>(),
vec!["B"]
);
// Third placement (C, d) should be from second
assert_eq!(
merged.placements[2].if_with.iter().collect::<Vec<_>>(),
vec!["C"]
);
}
#[test]
fn test_identity_is_order_insensitive_for_if_with() {
let base = Marker::new("Base").with_placements(vec![MarkerPlacement::new("d")
.with_if_with(vec!["A", "B"])
.with_value("v")]);
let second = Marker::new("Second").with_placements(vec![MarkerPlacement::new("d")
.with_if_with(vec!["B", "A"])
.with_value("v2")]);
let merged = merge_single_marker(&base, &second);
// With if_with as a set, identity is order-insensitive; second overrides base.
assert_eq!(merged.placements.len(), 1);
assert_eq!(merged.placements[0].value, Some("v2".to_string()));
}
#[test]
fn test_adds_new_marker_keys_from_second() {
let mut base = IndexMap::new();
base.insert("M1".to_string(), Marker::new("M1").with_placements(vec![]));
let mut second = IndexMap::new();
second.insert("M2".to_string(), Marker::new("M2").with_placements(vec![]));
let merged = merge_markers(&base, &second);
assert!(merged.contains_key("M1"));
assert!(merged.contains_key("M2"));
}
#[test]
fn test_merges_existing_marker_keys() {
let mut base = IndexMap::new();
base.insert(
"M".to_string(),
Marker::new("Base").with_placements(vec![MarkerPlacement::new("project")]),
);
let mut second = IndexMap::new();
second.insert(
"M".to_string(),
Marker::new("Second").with_placements(vec![MarkerPlacement::new("timesheet")
.with_if_with(vec!["Timesheet"])
.with_value("coding")]),
);
let merged = merge_markers(&base, &second);
assert_eq!(merged["M"].display_name, "Second");
assert_eq!(merged["M"].placements.len(), 2);
}
#[test]
fn test_merge_repository_configuration() {
let base = RepositoryConfiguration::new()
.with_dimension(
"project",
Dimension::new("Project")
.with_comment("c1")
.with_propagate(true),
)
.with_dimension(
"moment",
Dimension::new("Moment")
.with_comment("c2")
.with_propagate(true),
)
.with_marker(
"Streamd",
Marker::new("Streamd").with_placements(vec![MarkerPlacement::new("project")]),
);
let second = RepositoryConfiguration::new()
.with_dimension("project", Dimension::new("Project2").with_propagate(false))
.with_dimension(
"timesheet",
Dimension::new("Timesheet")
.with_comment("c3")
.with_propagate(false),
)
.with_marker(
"Streamd",
Marker::new("Streamd2").with_placements(vec![MarkerPlacement::new("timesheet")
.with_if_with(vec!["Timesheet"])
.with_value("coding")]),
)
.with_marker(
"JobHunting",
Marker::new("JobHunting").with_placements(vec![MarkerPlacement::new("project")]),
);
let merged = merge_repository_configuration(&base, &second);
assert!(merged.dimensions.contains_key("project"));
assert!(merged.dimensions.contains_key("moment"));
assert!(merged.dimensions.contains_key("timesheet"));
assert_eq!(merged.dimensions["project"].display_name, "Project2");
assert_eq!(merged.dimensions["project"].comment, Some("c1".to_string()));
assert!(!merged.dimensions["project"].propagate);
assert_eq!(merged.dimensions["moment"].display_name, "Moment");
assert_eq!(merged.dimensions["timesheet"].display_name, "Timesheet");
assert!(merged.markers.contains_key("Streamd"));
assert!(merged.markers.contains_key("JobHunting"));
assert_eq!(merged.markers["Streamd"].display_name, "Streamd2");
assert_eq!(merged.markers["Streamd"].placements.len(), 2);
}
}
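The merge precedence exercised by the tests above can be modeled standalone. This is a simplified sketch, not the crate's implementation: `Dim` stands in for `Dimension`, and only the three merge rules are reproduced (non-empty `display_name` from `second` wins, `comment` only overrides when `Some`, `propagate` only overrides when explicitly set).

```rust
// Simplified, self-contained model of the dimension merge semantics:
// - a non-empty display_name from `second` overrides `base`
// - comment: `second` wins only when it is Some
// - propagate: `second` wins only when it was explicitly set
#[derive(Debug, Clone, PartialEq)]
struct Dim {
    display_name: String,
    comment: Option<String>,
    propagate: bool,
    propagate_was_set: bool,
}

fn merge(base: &Dim, second: &Dim) -> Dim {
    Dim {
        display_name: if second.display_name.is_empty() {
            base.display_name.clone()
        } else {
            second.display_name.clone()
        },
        comment: second.comment.clone().or_else(|| base.comment.clone()),
        propagate: if second.propagate_was_set {
            second.propagate
        } else {
            base.propagate
        },
        propagate_was_set: base.propagate_was_set || second.propagate_was_set,
    }
}

fn main() {
    let base = Dim {
        display_name: "Base".to_string(),
        comment: Some("keep".to_string()),
        propagate: true,
        propagate_was_set: true,
    };
    let second = Dim {
        display_name: String::new(),
        comment: None,
        propagate: false,
        propagate_was_set: false,
    };
    let merged = merge(&base, &second);
    // Empty display_name and None comment fall back to base; propagate is retained.
    assert_eq!(merged.display_name, "Base");
    assert_eq!(merged.comment, Some("keep".to_string()));
    assert!(merged.propagate);
}
```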

src/localize/datetime.rs Normal file
@@ -0,0 +1,365 @@
use chrono::{DateTime, NaiveDate, NaiveDateTime, NaiveTime, Utc};
use once_cell::sync::Lazy;
use regex::Regex;
use std::path::Path;
/// Regex for extracting date and optional time from file names.
/// Format: YYYYMMDD or YYYYMMDD-HHMMSS (time can be 4-6 digits)
static FILE_NAME_REGEX: Lazy<Regex> =
Lazy::new(|| Regex::new(r"^(?P<date>\d{8})(?:-(?P<time>\d{4,6}))?.+\.md$").unwrap());
/// Regex for validating datetime marker format (14 digits).
static DATETIME_MARKER_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"^\d{14}$").unwrap());
/// Regex for validating date marker format (8 digits).
static DATE_MARKER_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"^\d{8}$").unwrap());
/// Regex for validating time marker format (6 digits).
static TIME_MARKER_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"^\d{6}$").unwrap());
/// Extract a datetime from a file name in the format YYYYMMDD-HHMMSS.
///
/// The time component is optional and can be 4-6 digits (HHMM, HHMMS, or HHMMSS).
///
/// # Examples
/// - "20230101-123456 Some Text.md" -> DateTime for 2023-01-01 12:34:56
/// - "20230101 Some Text.md" -> DateTime for 2023-01-01 00:00:00
/// - "invalid-file-name.md" -> None
pub fn extract_datetime_from_file_name(file_name: &str) -> Option<DateTime<Utc>> {
let base_name = Path::new(file_name)
.file_name()
.and_then(|s| s.to_str())
.unwrap_or(file_name);
let captures = FILE_NAME_REGEX.captures(base_name)?;
let date_str = captures.name("date")?.as_str();
let time_str = captures.name("time").map(|m| m.as_str()).unwrap_or("");
    // Right-pad the time string with zeros to 6 digits (e.g. "1234" -> "123400")
let time_str = format!("{:0<6}", time_str);
let datetime_str = format!(
"{} {}:{}:{}",
date_str,
&time_str[0..2],
&time_str[2..4],
&time_str[4..6]
);
NaiveDateTime::parse_from_str(&datetime_str, "%Y%m%d %H:%M:%S")
.ok()
.map(|dt| dt.and_utc())
}
/// Extract a datetime from a marker string in the exact format: YYYYMMDDHHMMSS.
///
/// Returns the parsed datetime if the format matches and values are valid.
pub fn extract_datetime_from_marker(marker: &str) -> Option<DateTime<Utc>> {
if !DATETIME_MARKER_REGEX.is_match(marker) {
return None;
}
NaiveDateTime::parse_from_str(marker, "%Y%m%d%H%M%S")
.ok()
.map(|dt| dt.and_utc())
}
/// Extract a date from a marker string in the exact format: YYYYMMDD.
///
/// Returns the parsed date if the format matches and values are valid.
pub fn extract_date_from_marker(marker: &str) -> Option<NaiveDate> {
if !DATE_MARKER_REGEX.is_match(marker) {
return None;
}
NaiveDate::parse_from_str(marker, "%Y%m%d").ok()
}
/// Extract a time from a marker string in the exact format: HHMMSS.
///
/// Returns the parsed time if the format matches and values are valid.
pub fn extract_time_from_marker(marker: &str) -> Option<NaiveTime> {
if !TIME_MARKER_REGEX.is_match(marker) {
return None;
}
NaiveTime::parse_from_str(marker, "%H%M%S").ok()
}
/// Extract a datetime from a list of markers, using an inherited datetime as fallback.
///
/// The function processes markers in reverse order, so earlier markers in the list
/// take precedence over later ones. It combines date-only and time-only markers when
/// both are present.
///
/// Rules:
/// - If a full datetime marker (14 digits) is found, it sets both date and time
/// - If only a date marker is found, the time defaults to midnight
/// - If only a time marker is found, the date is inherited
/// - If no valid markers are found, the inherited datetime is returned
pub fn extract_datetime_from_marker_list(
markers: &[String],
inherited_datetime: DateTime<Utc>,
) -> DateTime<Utc> {
let mut shard_time: Option<NaiveTime> = None;
let mut shard_date: Option<NaiveDate> = None;
    // Process markers in reverse order, so the first marker in the list wins
for marker in markers.iter().rev() {
if let Some(time) = extract_time_from_marker(marker) {
shard_time = Some(time);
}
if let Some(date) = extract_date_from_marker(marker) {
shard_date = Some(date);
}
if let Some(datetime) = extract_datetime_from_marker(marker) {
shard_date = Some(datetime.naive_utc().date());
shard_time = Some(datetime.naive_utc().time());
}
}
// Combine date and time, applying defaults as needed
let final_date = shard_date.unwrap_or_else(|| inherited_datetime.naive_utc().date());
let final_time = match (shard_date, shard_time) {
// If we have a date but no time, use midnight
(Some(_), None) => NaiveTime::from_hms_opt(0, 0, 0).unwrap(),
// Otherwise use the shard time or inherit
_ => shard_time.unwrap_or_else(|| inherited_datetime.naive_utc().time()),
};
NaiveDateTime::new(final_date, final_time).and_utc()
}
#[cfg(test)]
mod tests {
use super::*;
use chrono::TimeZone;
#[test]
fn test_extract_date_from_file_name_valid() {
let file_name = "20230101-123456 Some Text.md";
assert_eq!(
extract_datetime_from_file_name(file_name),
Some(Utc.with_ymd_and_hms(2023, 1, 1, 12, 34, 56).unwrap())
);
}
#[test]
fn test_extract_date_from_file_name_invalid() {
let file_name = "invalid-file-name.md";
assert_eq!(extract_datetime_from_file_name(file_name), None);
}
#[test]
fn test_extract_date_from_file_name_without_time() {
let file_name = "20230101 Some Text.md";
assert_eq!(
extract_datetime_from_file_name(file_name),
Some(Utc.with_ymd_and_hms(2023, 1, 1, 0, 0, 0).unwrap())
);
}
#[test]
fn test_extract_date_from_file_name_short_time() {
let file_name = "20230101-1234 Some Text.md";
assert_eq!(
extract_datetime_from_file_name(file_name),
Some(Utc.with_ymd_and_hms(2023, 1, 1, 12, 34, 0).unwrap())
);
}
#[test]
fn test_extract_date_from_file_name_empty_string() {
let file_name = "";
assert_eq!(extract_datetime_from_file_name(file_name), None);
}
#[test]
fn test_extract_date_from_file_name_with_full_path() {
let file_name = "/path/to/20230101-123456 Some Text.md";
assert_eq!(
extract_datetime_from_file_name(file_name),
Some(Utc.with_ymd_and_hms(2023, 1, 1, 12, 34, 56).unwrap())
);
}
#[test]
fn test_extract_datetime_from_marker_valid() {
let marker = "20250101150000";
assert_eq!(
extract_datetime_from_marker(marker),
Some(Utc.with_ymd_and_hms(2025, 1, 1, 15, 0, 0).unwrap())
);
}
#[test]
fn test_extract_datetime_from_marker_invalid_format() {
assert_eq!(extract_datetime_from_marker("2025010115000"), None); // too short
assert_eq!(extract_datetime_from_marker("202501011500000"), None); // too long
assert_eq!(extract_datetime_from_marker("2025-01-01T150000"), None); // separators
assert_eq!(extract_datetime_from_marker("2025010115000a"), None); // non-digit
assert_eq!(extract_datetime_from_marker(""), None);
}
#[test]
fn test_extract_datetime_from_marker_invalid_values() {
assert_eq!(extract_datetime_from_marker("20250230120000"), None); // Feb 30
assert_eq!(extract_datetime_from_marker("20250101126000"), None); // minute 60
assert_eq!(extract_datetime_from_marker("20250101240000"), None); // hour 24
}
#[test]
fn test_extract_date_from_marker_valid() {
let marker = "20250101";
assert_eq!(
extract_date_from_marker(marker),
Some(NaiveDate::from_ymd_opt(2025, 1, 1).unwrap())
);
}
#[test]
fn test_extract_date_from_marker_invalid_format() {
assert_eq!(extract_date_from_marker("2025010"), None); // too short
assert_eq!(extract_date_from_marker("202501011"), None); // too long
assert_eq!(extract_date_from_marker("2025-01-01"), None); // separators
assert_eq!(extract_date_from_marker("2025010a"), None); // non-digit
assert_eq!(extract_date_from_marker(""), None);
}
#[test]
fn test_extract_date_from_marker_invalid_values() {
assert_eq!(extract_date_from_marker("20250230"), None); // Feb 30
assert_eq!(extract_date_from_marker("20251301"), None); // month 13
assert_eq!(extract_date_from_marker("20250132"), None); // day 32
}
#[test]
fn test_extract_time_from_marker_valid() {
let marker = "150000";
assert_eq!(
extract_time_from_marker(marker),
Some(NaiveTime::from_hms_opt(15, 0, 0).unwrap())
);
}
#[test]
fn test_extract_time_from_marker_invalid_format() {
assert_eq!(extract_time_from_marker("15000"), None); // too short
assert_eq!(extract_time_from_marker("1500000"), None); // too long
assert_eq!(extract_time_from_marker("15:00:00"), None); // separators
assert_eq!(extract_time_from_marker("15000a"), None); // non-digit
assert_eq!(extract_time_from_marker(""), None);
}
#[test]
fn test_extract_time_from_marker_invalid_values() {
assert_eq!(extract_time_from_marker("240000"), None); // hour 24
assert_eq!(extract_time_from_marker("156000"), None); // minute 60
// Note: chrono allows leap seconds (60), so 150060 is valid
}
#[test]
fn test_no_markers_inherits_datetime() {
let inherited = Utc.with_ymd_and_hms(2025, 1, 2, 3, 4, 5).unwrap();
assert_eq!(extract_datetime_from_marker_list(&[], inherited), inherited);
}
#[test]
fn test_unrelated_markers_inherits_datetime() {
let inherited = Utc.with_ymd_and_hms(2025, 1, 2, 3, 4, 5).unwrap();
let markers: Vec<String> = vec![
"not-a-marker".to_string(),
"2025-01-01".to_string(),
"1500".to_string(),
"1234567".to_string(),
];
assert_eq!(
extract_datetime_from_marker_list(&markers, inherited),
inherited
);
}
#[test]
fn test_date_only_marker_sets_midnight() {
let inherited = Utc.with_ymd_and_hms(2025, 6, 7, 8, 9, 10).unwrap();
let markers = vec!["20250101".to_string()];
assert_eq!(
extract_datetime_from_marker_list(&markers, inherited),
Utc.with_ymd_and_hms(2025, 1, 1, 0, 0, 0).unwrap()
);
}
#[test]
fn test_time_only_marker_inherits_date() {
let inherited = Utc.with_ymd_and_hms(2025, 6, 7, 8, 9, 10).unwrap();
let markers = vec!["150000".to_string()];
assert_eq!(
extract_datetime_from_marker_list(&markers, inherited),
Utc.with_ymd_and_hms(2025, 6, 7, 15, 0, 0).unwrap()
);
}
#[test]
fn test_datetime_marker_overrides_both_date_and_time() {
let inherited = Utc.with_ymd_and_hms(2025, 6, 7, 8, 9, 10).unwrap();
let markers = vec!["20250101150000".to_string()];
assert_eq!(
extract_datetime_from_marker_list(&markers, inherited),
Utc.with_ymd_and_hms(2025, 1, 1, 15, 0, 0).unwrap()
);
}
#[test]
fn test_combined_date_and_time_markers() {
let inherited = Utc.with_ymd_and_hms(2025, 6, 7, 8, 9, 10).unwrap();
let markers = vec!["20250101".to_string(), "150000".to_string()];
assert_eq!(
extract_datetime_from_marker_list(&markers, inherited),
Utc.with_ymd_and_hms(2025, 1, 1, 15, 0, 0).unwrap()
);
}
#[test]
fn test_first_marker_wins_when_multiple_dates_or_times() {
let inherited = Utc.with_ymd_and_hms(2025, 6, 7, 8, 9, 10).unwrap();
let markers = vec![
"20250101".to_string(),
"150000".to_string(),
"20250102".to_string(),
"160000".to_string(),
];
assert_eq!(
extract_datetime_from_marker_list(&markers, inherited),
Utc.with_ymd_and_hms(2025, 1, 1, 15, 0, 0).unwrap()
);
}
#[test]
    fn test_earlier_separated_date_and_time_markers_win() {
let inherited = Utc.with_ymd_and_hms(2025, 6, 7, 8, 9, 10).unwrap();
let markers = vec![
"20250101".to_string(),
"150000".to_string(),
"20250102160000".to_string(),
];
// The first date (20250101) and first time (150000) should win over the later combined datetime
assert_eq!(
extract_datetime_from_marker_list(&markers, inherited),
Utc.with_ymd_and_hms(2025, 1, 1, 15, 0, 0).unwrap()
);
}
#[test]
fn test_invalid_date_or_time_markers_are_ignored() {
let inherited = Utc.with_ymd_and_hms(2025, 6, 7, 8, 9, 10).unwrap();
let markers = vec![
"20251301".to_string(), // invalid month
"240000".to_string(), // invalid hour
"20250101".to_string(), // valid
"150000".to_string(), // valid
];
assert_eq!(
extract_datetime_from_marker_list(&markers, inherited),
Utc.with_ymd_and_hms(2025, 1, 1, 15, 0, 0).unwrap()
);
}
}
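The precedence rule the tests above pin down (reverse iteration with overwriting, so the first valid marker in the list wins) can be modeled without chrono. `pick_date` below is a hypothetical stand-in for the real extractors; "valid" is reduced to "exactly 8 ASCII digits" for the sketch.

```rust
// Stand-in for extract_date_from_marker: "valid" here means exactly 8 ASCII digits.
fn pick_date(marker: &str) -> Option<&str> {
    (marker.len() == 8 && marker.bytes().all(|b| b.is_ascii_digit())).then_some(marker)
}

// Iterating in reverse and overwriting on every hit means the marker processed
// last (the FIRST one in the list) determines the result.
fn first_valid_date<'a>(markers: &[&'a str]) -> Option<&'a str> {
    let mut date = None;
    for marker in markers.iter().rev() {
        if let Some(d) = pick_date(marker) {
            date = Some(d);
        }
    }
    date
}

fn main() {
    assert_eq!(first_valid_date(&["20250101", "20250102"]), Some("20250101"));
    assert_eq!(first_valid_date(&["junk", "20250103"]), Some("20250103"));
    assert_eq!(first_valid_date(&["junk"]), None);
}
```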

src/localize/mod.rs Normal file
@@ -0,0 +1,15 @@
mod configuration;
mod datetime;
mod preconfigured;
mod shard;
pub use configuration::{
merge_dimensions, merge_markers, merge_repository_configuration, merge_single_dimension,
merge_single_marker,
};
pub use datetime::{
extract_date_from_marker, extract_datetime_from_file_name, extract_datetime_from_marker,
extract_datetime_from_marker_list, extract_time_from_marker,
};
pub use preconfigured::TaskConfiguration;
pub use shard::{localize_shard, localize_stream_file};

src/localize/preconfigured.rs Normal file
@@ -0,0 +1,46 @@
use once_cell::sync::Lazy;
use crate::models::{Dimension, Marker, MarkerPlacement, RepositoryConfiguration};
/// Pre-configured repository configuration for task management.
#[allow(non_upper_case_globals)]
pub static TaskConfiguration: Lazy<RepositoryConfiguration> = Lazy::new(|| {
RepositoryConfiguration::new()
.with_dimension(
"task",
Dimension::new("Task")
.with_comment(
"If placed, the given shard is a task. The placement determines the state.",
)
.with_propagate(false),
)
.with_dimension(
"project",
Dimension::new("Project")
.with_comment("Project the task is attached to")
.with_propagate(true),
)
.with_marker(
"Task",
Marker::new("Task").with_placements(vec![
MarkerPlacement::new("task").with_value("open"),
MarkerPlacement::new("task")
.with_if_with(vec!["Done"])
.with_value("done"),
MarkerPlacement::new("task")
.with_if_with(vec!["Waiting"])
.with_value("waiting"),
MarkerPlacement::new("task")
.with_if_with(vec!["Cancelled"])
.with_value("cancelled"),
MarkerPlacement::new("task")
.with_if_with(vec!["NotDone"])
.with_value("cancelled"),
]),
)
.with_marker(
"WaitingFor",
Marker::new("Task")
.with_placements(vec![MarkerPlacement::new("task").with_value("waiting")]),
)
});
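The way the "Task" placements above resolve to a state can be sketched standalone: placements apply in order, and since `overwrites` defaults to true, the last placement whose `if_with` markers are all present wins. The `Placement` struct below is a simplified model, not the crate's `MarkerPlacement`.

```rust
use std::collections::HashSet;

// Simplified model of placement resolution: applied in order, each matching
// placement overwrites, so the last one whose if_with markers are all present
// determines the final value.
struct Placement {
    if_with: &'static [&'static str],
    value: &'static str,
}

fn resolve_state(markers: &[&str], placements: &[Placement]) -> Option<&'static str> {
    let present: HashSet<&str> = markers.iter().copied().collect();
    let mut state = None;
    for p in placements {
        if p.if_with.iter().all(|m| present.contains(m)) {
            state = Some(p.value);
        }
    }
    state
}

fn main() {
    let placements = [
        Placement { if_with: &[], value: "open" },
        Placement { if_with: &["Done"], value: "done" },
        Placement { if_with: &["Waiting"], value: "waiting" },
    ];
    // No qualifier: the unconditional placement is the only match.
    assert_eq!(resolve_state(&["Task"], &placements), Some("open"));
    // "Done" present: the later, more specific placement overwrites "open".
    assert_eq!(resolve_state(&["Task", "Done"], &placements), Some("done"));
}
```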

src/localize/shard.rs Normal file
@@ -0,0 +1,282 @@
use chrono::{DateTime, Utc};
use indexmap::{IndexMap, IndexSet};
use crate::error::StreamdError;
use crate::models::{LocalizedShard, RepositoryConfiguration, Shard, StreamFile};
use super::datetime::{extract_datetime_from_file_name, extract_datetime_from_marker_list};
/// Localize a shard within the repository's coordinate system.
///
/// This function:
/// 1. Extracts datetime from markers
/// 2. Applies marker placements to determine dimensional position
/// 3. Propagates dimensional values to children based on dimension configuration
pub fn localize_shard(
shard: &Shard,
config: &RepositoryConfiguration,
propagated: &IndexMap<String, String>,
moment: DateTime<Utc>,
) -> LocalizedShard {
let mut position = propagated.clone();
let mut private_position: IndexMap<String, String> = IndexMap::new();
// Extract datetime from markers
let adjusted_moment = extract_datetime_from_marker_list(&shard.markers, moment);
// Convert markers to a set for if_with checking
let marker_set: IndexSet<String> = shard.markers.iter().cloned().collect();
// Process each marker and its placements
for marker in &shard.markers {
if let Some(marker_def) = config.markers.get(marker) {
for placement in &marker_def.placements {
// Check if_with condition
if !placement.if_with.is_subset(&marker_set) {
continue;
}
// Get the dimension configuration
let dimension = match config.dimensions.get(&placement.dimension) {
Some(d) => d,
None => continue,
};
let value = placement.value.clone().unwrap_or_else(|| marker.clone());
// Check if we should place the value
let should_place = placement.overwrites
|| (!position.contains_key(&placement.dimension)
&& !private_position.contains_key(&placement.dimension));
if should_place {
if dimension.propagate {
position.insert(placement.dimension.clone(), value);
} else {
private_position.insert(placement.dimension.clone(), value);
}
}
}
}
}
// Recursively localize children with propagated position
let children: Vec<LocalizedShard> = shard
.children
.iter()
.map(|child| localize_shard(child, config, &position, adjusted_moment))
.collect();
// Merge private position into final position
position.extend(private_position);
LocalizedShard {
markers: shard.markers.clone(),
tags: shard.tags.clone(),
start_line: shard.start_line,
end_line: shard.end_line,
moment: adjusted_moment,
location: position,
children,
}
}
/// Localize an entire stream file.
///
/// Extracts the datetime from the file name and localizes the root shard.
pub fn localize_stream_file(
stream_file: &StreamFile,
config: &RepositoryConfiguration,
) -> Result<LocalizedShard, StreamdError> {
let shard_date = extract_datetime_from_file_name(&stream_file.file_name)
.ok_or_else(|| StreamdError::DateExtractionError(stream_file.file_name.clone()))?;
let shard = stream_file
.shard
.as_ref()
.ok_or_else(|| StreamdError::DateExtractionError("No shard in file".to_string()))?;
let mut initial_location = IndexMap::new();
initial_location.insert("file".to_string(), stream_file.file_name.clone());
Ok(localize_shard(shard, config, &initial_location, shard_date))
}
#[cfg(test)]
mod tests {
use super::*;
use crate::models::{Dimension, Marker, MarkerPlacement};
use chrono::TimeZone;
fn make_config() -> RepositoryConfiguration {
RepositoryConfiguration::new()
.with_dimension(
"project",
Dimension::new("Project")
.with_comment("GTD Project that is being worked on")
.with_propagate(true),
)
.with_dimension(
"moment",
Dimension::new("Moment")
.with_comment("Timestamp this entry was created at")
.with_propagate(true),
)
.with_dimension(
"timesheet",
Dimension::new("Timesheet")
.with_comment("Time Cards for Time Tracking")
.with_propagate(true),
)
.with_marker(
"Streamd",
Marker::new("Streamd").with_placements(vec![
MarkerPlacement::new("project"),
MarkerPlacement::new("timesheet")
.with_if_with(vec!["Timesheet"])
.with_value("coding"),
]),
)
.with_marker(
"JobHunting",
Marker::new("JobHunting").with_placements(vec![MarkerPlacement::new("project")]),
)
}
#[test]
fn test_project_simple_stream_file() {
let config = make_config();
let stream_file = StreamFile::new("20250622-121000 Test File.md")
.with_shard(Shard::new(1, 1).with_markers(vec!["Streamd".to_string()]));
let result = localize_stream_file(&stream_file, &config).unwrap();
assert_eq!(
result.moment,
Utc.with_ymd_and_hms(2025, 6, 22, 12, 10, 0).unwrap()
);
assert_eq!(result.markers, vec!["Streamd"]);
assert_eq!(result.location.get("project"), Some(&"Streamd".to_string()));
assert_eq!(
result.location.get("file"),
Some(&stream_file.file_name.clone())
);
}
#[test]
fn test_timesheet_use_case() {
let config = make_config();
let stream_file = StreamFile::new("20260131-210000 Test File.md").with_shard(
Shard::new(1, 1).with_markers(vec!["Timesheet".to_string(), "Streamd".to_string()]),
);
let result = localize_stream_file(&stream_file, &config).unwrap();
assert_eq!(
result.moment,
Utc.with_ymd_and_hms(2026, 1, 31, 21, 0, 0).unwrap()
);
assert_eq!(result.location.get("project"), Some(&"Streamd".to_string()));
assert_eq!(
result.location.get("timesheet"),
Some(&"coding".to_string())
);
}
#[test]
fn test_overwrites_true_propagated_dimension_overwrites_existing_value() {
let config = RepositoryConfiguration::new()
.with_dimension("project", Dimension::new("Project").with_propagate(true))
.with_marker(
"A",
Marker::new("A")
.with_placements(vec![MarkerPlacement::new("project").with_value("a")]),
)
.with_marker(
"B",
Marker::new("B").with_placements(vec![MarkerPlacement::new("project")
.with_value("b")
.with_overwrites(true)]),
);
let stream_file = StreamFile::new("20260131-210000 Test File.md")
.with_shard(Shard::new(1, 1).with_markers(vec!["A".to_string(), "B".to_string()]));
let result = localize_stream_file(&stream_file, &config).unwrap();
assert_eq!(result.location.get("project"), Some(&"b".to_string()));
}
#[test]
fn test_overwrites_false_propagated_dimension_does_not_overwrite_existing_value() {
let config = RepositoryConfiguration::new()
.with_dimension("project", Dimension::new("Project").with_propagate(true))
.with_marker(
"A",
Marker::new("A")
.with_placements(vec![MarkerPlacement::new("project").with_value("a")]),
)
.with_marker(
"B",
Marker::new("B").with_placements(vec![MarkerPlacement::new("project")
.with_value("b")
.with_overwrites(false)]),
);
let stream_file = StreamFile::new("20260131-210000 Test File.md")
.with_shard(Shard::new(1, 1).with_markers(vec!["A".to_string(), "B".to_string()]));
let result = localize_stream_file(&stream_file, &config).unwrap();
assert_eq!(result.location.get("project"), Some(&"a".to_string()));
}
#[test]
fn test_overwrites_true_non_propagated_dimension_overwrites_private_value() {
let config = RepositoryConfiguration::new()
.with_dimension("label", Dimension::new("Label").with_propagate(false))
.with_marker(
"A",
Marker::new("A")
.with_placements(vec![MarkerPlacement::new("label").with_value("a")]),
)
.with_marker(
"B",
Marker::new("B").with_placements(vec![MarkerPlacement::new("label")
.with_value("b")
.with_overwrites(true)]),
);
let stream_file = StreamFile::new("20260131-210000 Test File.md")
.with_shard(Shard::new(1, 1).with_markers(vec!["A".to_string(), "B".to_string()]));
let result = localize_stream_file(&stream_file, &config).unwrap();
assert_eq!(result.location.get("label"), Some(&"b".to_string()));
}
#[test]
fn test_overwrites_false_non_propagated_dimension_does_not_overwrite_private_value() {
let config = RepositoryConfiguration::new()
.with_dimension("label", Dimension::new("Label").with_propagate(false))
.with_marker(
"A",
Marker::new("A").with_placements(vec![MarkerPlacement::new("label")
.with_value("a")
.with_overwrites(true)]),
)
.with_marker(
"B",
Marker::new("B").with_placements(vec![MarkerPlacement::new("label")
.with_value("b")
.with_overwrites(false)]),
);
let stream_file = StreamFile::new("20260131-210000 Test File.md")
.with_shard(Shard::new(1, 1).with_markers(vec!["A".to_string(), "B".to_string()]));
let result = localize_stream_file(&stream_file, &config).unwrap();
assert_eq!(result.location.get("label"), Some(&"a".to_string()));
}
}
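The placement rule the overwrite tests above exercise can be modeled with the standard library alone. This sketch uses `std::collections::HashMap` instead of `IndexMap` (so insertion order is not preserved, unlike the real code) and reduces `localize_shard` to the one decision being tested.

```rust
use std::collections::HashMap;

// Simplified model of the placement rule in localize_shard: a value is placed
// when the placement overwrites, or when the dimension is still unset in both
// the propagated and the private position.
fn place(
    position: &mut HashMap<String, String>,
    private: &mut HashMap<String, String>,
    dimension: &str,
    value: &str,
    overwrites: bool,
    propagate: bool,
) {
    let unset = !position.contains_key(dimension) && !private.contains_key(dimension);
    if overwrites || unset {
        let target = if propagate { position } else { private };
        target.insert(dimension.to_string(), value.to_string());
    }
}

fn main() {
    let mut position = HashMap::new();
    let mut private = HashMap::new();
    // First placement sets the value; a later non-overwriting one is ignored.
    place(&mut position, &mut private, "project", "a", true, true);
    place(&mut position, &mut private, "project", "b", false, true);
    assert_eq!(position.get("project"), Some(&"a".to_string()));
    // Non-propagated dimensions land in the private map only.
    place(&mut position, &mut private, "label", "x", true, false);
    assert!(private.contains_key("label"));
    assert!(!position.contains_key("label"));
}
```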

src/main.rs Normal file
@@ -0,0 +1,19 @@
use clap::Parser;
use streamd::cli::{Cli, Commands};
fn main() -> miette::Result<()> {
let cli = Cli::parse();
match cli.command {
Some(Commands::New) => streamd::cli::commands::new::run()?,
Some(Commands::Todo) => streamd::cli::commands::todo::run()?,
Some(Commands::Edit { number }) => streamd::cli::commands::edit::run(number)?,
Some(Commands::Timesheet) => streamd::cli::commands::timesheet::run()?,
Some(Commands::Completions { shell }) => {
streamd::cli::commands::completions::run(shell);
}
None => streamd::cli::commands::new::run()?,
}
Ok(())
}

src/models/dimension.rs Normal file
@@ -0,0 +1,42 @@
use serde::{Deserialize, Serialize};
/// A Dimension represents an axis along which shards can be categorized.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct Dimension {
/// Human-readable name for display purposes.
pub display_name: String,
/// Optional description of what this dimension represents.
#[serde(default)]
pub comment: Option<String>,
/// Whether values in this dimension should propagate to child shards.
#[serde(default)]
pub propagate: bool,
/// Tracks whether 'propagate' was explicitly set (for merge semantics).
#[serde(skip)]
pub propagate_was_set: bool,
}
impl Dimension {
pub fn new(display_name: impl Into<String>) -> Self {
Self {
display_name: display_name.into(),
comment: None,
propagate: false,
propagate_was_set: false,
}
}
pub fn with_comment(mut self, comment: impl Into<String>) -> Self {
self.comment = Some(comment.into());
self
}
pub fn with_propagate(mut self, propagate: bool) -> Self {
self.propagate = propagate;
self.propagate_was_set = true;
self
}
}

src/models/localized_shard.rs Normal file
@@ -0,0 +1,63 @@
use chrono::{DateTime, Utc};
use indexmap::IndexMap;
use serde::{Deserialize, Serialize};
/// A LocalizedShard extends a Shard with temporal and dimensional context.
/// It represents a shard that has been placed within the repository's coordinate system.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct LocalizedShard {
/// Markers are tags that appear at the beginning of a line before any content.
pub markers: Vec<String>,
/// Tags are @-prefixed identifiers that appear after content has started.
pub tags: Vec<String>,
/// The starting line number in the source file (1-indexed).
pub start_line: usize,
/// The ending line number in the source file (1-indexed).
pub end_line: usize,
/// The moment in time this shard is associated with.
pub moment: DateTime<Utc>,
/// The dimensional location of this shard (dimension name -> value).
pub location: IndexMap<String, String>,
/// Child shards nested within this shard.
pub children: Vec<LocalizedShard>,
}
impl LocalizedShard {
pub fn new(start_line: usize, end_line: usize, moment: DateTime<Utc>) -> Self {
Self {
markers: Vec::new(),
tags: Vec::new(),
start_line,
end_line,
moment,
location: IndexMap::new(),
children: Vec::new(),
}
}
pub fn with_markers(mut self, markers: Vec<String>) -> Self {
self.markers = markers;
self
}
pub fn with_tags(mut self, tags: Vec<String>) -> Self {
self.tags = tags;
self
}
pub fn with_location(mut self, location: IndexMap<String, String>) -> Self {
self.location = location;
self
}
pub fn with_children(mut self, children: Vec<LocalizedShard>) -> Self {
self.children = children;
self
}
}

src/models/marker.rs Normal file

@@ -0,0 +1,76 @@
use indexmap::IndexSet;
use serde::{Deserialize, Serialize};
/// A MarkerPlacement defines how a marker affects dimension values.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct MarkerPlacement {
/// Only apply this placement if all markers in `if_with` are also present.
#[serde(default)]
pub if_with: IndexSet<String>,
/// The dimension to place a value in.
pub dimension: String,
/// The value to place. If None, uses the marker name itself.
#[serde(default)]
pub value: Option<String>,
/// Whether this placement should overwrite existing values in the dimension.
#[serde(default = "default_overwrites")]
pub overwrites: bool,
}
fn default_overwrites() -> bool {
true
}
impl MarkerPlacement {
pub fn new(dimension: impl Into<String>) -> Self {
Self {
if_with: IndexSet::new(),
dimension: dimension.into(),
value: None,
overwrites: true,
}
}
pub fn with_if_with(mut self, if_with: impl IntoIterator<Item = impl Into<String>>) -> Self {
self.if_with = if_with.into_iter().map(Into::into).collect();
self
}
pub fn with_value(mut self, value: impl Into<String>) -> Self {
self.value = Some(value.into());
self
}
pub fn with_overwrites(mut self, overwrites: bool) -> Self {
self.overwrites = overwrites;
self
}
}
/// A Marker defines how an @-tag should be interpreted for dimensional placement.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct Marker {
/// Human-readable name for display purposes.
pub display_name: String,
/// The dimensional placements this marker creates.
#[serde(default)]
pub placements: Vec<MarkerPlacement>,
}
impl Marker {
pub fn new(display_name: impl Into<String>) -> Self {
Self {
display_name: display_name.into(),
placements: Vec::new(),
}
}
pub fn with_placements(mut self, placements: Vec<MarkerPlacement>) -> Self {
self.placements = placements;
self
}
}

src/models/mod.rs Normal file

@@ -0,0 +1,11 @@
mod dimension;
mod localized_shard;
mod marker;
mod shard;
mod timecard;
pub use dimension::Dimension;
pub use localized_shard::LocalizedShard;
pub use marker::{Marker, MarkerPlacement};
pub use shard::{RepositoryConfiguration, Shard, StreamFile};
pub use timecard::{SpecialDayType, Timecard, Timesheet};

src/models/shard.rs Normal file

@@ -0,0 +1,115 @@
use indexmap::IndexMap;
use serde::{Deserialize, Serialize};
use super::{Dimension, Marker};
/// A Shard represents a section of a markdown file that may contain markers and tags.
/// Shards form a tree structure where children inherit context from their parents.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct Shard {
/// Markers are tags that appear at the beginning of a line before any content.
/// They define dimensional placement for the shard.
#[serde(default)]
pub markers: Vec<String>,
/// Tags are @-prefixed identifiers that appear after content has started.
/// They are informational but don't affect dimensional placement.
#[serde(default)]
pub tags: Vec<String>,
/// The starting line number in the source file (1-indexed).
pub start_line: usize,
/// The ending line number in the source file (1-indexed).
pub end_line: usize,
/// Child shards nested within this shard.
#[serde(default)]
pub children: Vec<Shard>,
}
impl Shard {
pub fn new(start_line: usize, end_line: usize) -> Self {
Self {
markers: Vec::new(),
tags: Vec::new(),
start_line,
end_line,
children: Vec::new(),
}
}
pub fn with_markers(mut self, markers: Vec<String>) -> Self {
self.markers = markers;
self
}
pub fn with_tags(mut self, tags: Vec<String>) -> Self {
self.tags = tags;
self
}
pub fn with_children(mut self, children: Vec<Shard>) -> Self {
self.children = children;
self
}
}
/// A StreamFile represents a parsed markdown file with its associated shard tree.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct StreamFile {
/// The file name or path of the source file.
pub file_name: String,
/// The root shard representing the entire file's content structure.
pub shard: Option<Shard>,
}
impl StreamFile {
pub fn new(file_name: impl Into<String>) -> Self {
Self {
file_name: file_name.into(),
shard: None,
}
}
pub fn with_shard(mut self, shard: Shard) -> Self {
self.shard = Some(shard);
self
}
}
/// Repository configuration defines the dimensions and markers used to organize shards.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct RepositoryConfiguration {
/// Dimensions define the axes along which shards can be positioned.
pub dimensions: IndexMap<String, Dimension>,
/// Markers define how @-tags map to dimension placements.
pub markers: IndexMap<String, Marker>,
}
impl RepositoryConfiguration {
pub fn new() -> Self {
Self {
dimensions: IndexMap::new(),
markers: IndexMap::new(),
}
}
pub fn with_dimension(mut self, name: impl Into<String>, dimension: Dimension) -> Self {
self.dimensions.insert(name.into(), dimension);
self
}
pub fn with_marker(mut self, name: impl Into<String>, marker: Marker) -> Self {
self.markers.insert(name.into(), marker);
self
}
}
impl Default for RepositoryConfiguration {
fn default() -> Self {
Self::new()
}
}

src/models/timecard.rs Normal file

@@ -0,0 +1,77 @@
use chrono::NaiveDate;
use chrono::NaiveTime;
use serde::{Deserialize, Serialize};
/// Type of special day that affects timesheet calculations.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
pub enum SpecialDayType {
#[serde(rename = "VACATION")]
Vacation,
#[serde(rename = "UNDERTIME")]
Undertime,
#[serde(rename = "HOLIDAY")]
Holiday,
#[serde(rename = "WEEKEND")]
Weekend,
}
impl std::fmt::Display for SpecialDayType {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
SpecialDayType::Vacation => write!(f, "VACATION"),
SpecialDayType::Undertime => write!(f, "UNDERTIME"),
SpecialDayType::Holiday => write!(f, "HOLIDAY"),
SpecialDayType::Weekend => write!(f, "WEEKEND"),
}
}
}
/// A Timecard represents a single work period with start and end times.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct Timecard {
pub from_time: NaiveTime,
pub to_time: NaiveTime,
}
impl Timecard {
pub fn new(from_time: NaiveTime, to_time: NaiveTime) -> Self {
Self { from_time, to_time }
}
}
/// A Timesheet aggregates all time tracking information for a single day.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct Timesheet {
pub date: NaiveDate,
#[serde(default)]
pub is_sick_leave: bool,
#[serde(default)]
pub special_day_type: Option<SpecialDayType>,
pub timecards: Vec<Timecard>,
}
impl Timesheet {
pub fn new(date: NaiveDate) -> Self {
Self {
date,
is_sick_leave: false,
special_day_type: None,
timecards: Vec::new(),
}
}
pub fn with_sick_leave(mut self, is_sick_leave: bool) -> Self {
self.is_sick_leave = is_sick_leave;
self
}
pub fn with_special_day_type(mut self, special_day_type: SpecialDayType) -> Self {
self.special_day_type = Some(special_day_type);
self
}
pub fn with_timecards(mut self, timecards: Vec<Timecard>) -> Self {
self.timecards = timecards;
self
}
}

src/query/find.rs Normal file

@@ -0,0 +1,209 @@
use crate::models::LocalizedShard;
/// Find all shards matching a predicate, recursively searching through children.
///
/// The search is depth-first, with the parent tested before its children.
pub fn find_shard<F>(shards: &[LocalizedShard], predicate: F) -> Vec<LocalizedShard>
where
F: Fn(&LocalizedShard) -> bool + Copy,
{
let mut found_shards = Vec::new();
for shard in shards {
if predicate(shard) {
found_shards.push(shard.clone());
}
found_shards.extend(find_shard(&shard.children, predicate));
}
found_shards
}
/// Find all shards where a specific dimension has a specific value.
pub fn find_shard_by_position(
shards: &[LocalizedShard],
dimension: &str,
value: &str,
) -> Vec<LocalizedShard> {
find_shard(shards, |shard| {
shard
.location
.get(dimension)
.map(|v| v == value)
.unwrap_or(false)
})
}
/// Find all shards where a specific dimension is set (regardless of value).
pub fn find_shard_by_set_dimension(
shards: &[LocalizedShard],
dimension: &str,
) -> Vec<LocalizedShard> {
find_shard(shards, |shard| shard.location.contains_key(dimension))
}
#[cfg(test)]
mod tests {
use super::*;
use chrono::{TimeZone, Utc};
use indexmap::IndexMap;
fn generate_localized_shard(
location: Option<IndexMap<String, String>>,
children: Option<Vec<LocalizedShard>>,
) -> LocalizedShard {
LocalizedShard {
start_line: 1,
end_line: 1,
moment: Utc.with_ymd_and_hms(2020, 1, 1, 0, 0, 0).unwrap(),
location: location.unwrap_or_default(),
children: children.unwrap_or_default(),
markers: vec![],
tags: vec![],
}
}
#[test]
fn test_returns_empty_when_no_match() {
let mut loc = IndexMap::new();
loc.insert("file".to_string(), "a.md".to_string());
let root = generate_localized_shard(Some(loc), None);
let shards = vec![root];
let result = find_shard(&shards, |s| s.location.contains_key("missing"));
assert!(result.is_empty());
}
#[test]
fn test_finds_matches_depth_first_and_preserves_order() {
let mut loc1 = IndexMap::new();
loc1.insert("k".to_string(), "match".to_string());
let grandchild = generate_localized_shard(Some(loc1.clone()), None);
let child1 = generate_localized_shard(Some(loc1), Some(vec![grandchild.clone()]));
let mut loc2 = IndexMap::new();
loc2.insert("k".to_string(), "nope".to_string());
let child2 = generate_localized_shard(Some(loc2.clone()), None);
let root = generate_localized_shard(Some(loc2), Some(vec![child1.clone(), child2]));
let result = find_shard(&[root], |s| {
s.location.get("k") == Some(&"match".to_string())
});
assert_eq!(result.len(), 2);
assert_eq!(result[0], child1);
assert_eq!(result[1], grandchild);
}
#[test]
fn test_includes_root_if_it_matches() {
let mut loc = IndexMap::new();
loc.insert("k".to_string(), "match".to_string());
let child = generate_localized_shard(Some(loc.clone()), None);
let root = generate_localized_shard(Some(loc), Some(vec![child]));
let result = find_shard(std::slice::from_ref(&root), |s| {
s.location.get("k") == Some(&"match".to_string())
});
assert_eq!(result[0], root);
assert_eq!(result.len(), 2);
}
#[test]
fn test_multiple_roots_keeps_left_to_right_order() {
let mut loc_match = IndexMap::new();
loc_match.insert("k".to_string(), "match".to_string());
let mut loc_nope = IndexMap::new();
loc_nope.insert("k".to_string(), "nope".to_string());
let a = generate_localized_shard(Some(loc_match.clone()), None);
let b = generate_localized_shard(Some(loc_match), None);
let c = generate_localized_shard(Some(loc_nope), None);
let result = find_shard(&[a.clone(), b.clone(), c], |s| {
s.location.get("k") == Some(&"match".to_string())
});
assert_eq!(result, vec![a, b]);
}
#[test]
fn test_query_function_can_use_arbitrary_logic() {
let mut loc1 = IndexMap::new();
loc1.insert("x".to_string(), "1".to_string());
let mut loc2 = IndexMap::new();
loc2.insert("x".to_string(), "2".to_string());
let mut loc3 = IndexMap::new();
loc3.insert("x".to_string(), "3".to_string());
let a = generate_localized_shard(Some(loc1), None);
let b = generate_localized_shard(Some(loc2), None);
let c = generate_localized_shard(Some(loc3), None);
let root = generate_localized_shard(None, Some(vec![a, b.clone(), c]));
let result = find_shard(&[root], |shard| {
shard
.location
.get("x")
.and_then(|x| x.parse::<i32>().ok())
.map(|x| x % 2 == 0)
.unwrap_or(false)
});
assert_eq!(result, vec![b]);
}
#[test]
fn test_matches_only_when_dimension_present_and_equal() {
let mut loc_match = IndexMap::new();
loc_match.insert("file".to_string(), "a.md".to_string());
loc_match.insert("line".to_string(), "10".to_string());
let mut loc_wrong = IndexMap::new();
loc_wrong.insert("file".to_string(), "a.md".to_string());
loc_wrong.insert("line".to_string(), "11".to_string());
let mut loc_missing = IndexMap::new();
loc_missing.insert("file".to_string(), "a.md".to_string());
let match_shard = generate_localized_shard(Some(loc_match), None);
let wrong_value = generate_localized_shard(Some(loc_wrong), None);
let missing_dim = generate_localized_shard(Some(loc_missing), None);
let mut root_loc = IndexMap::new();
root_loc.insert("root".to_string(), "x".to_string());
let root = generate_localized_shard(
Some(root_loc),
Some(vec![match_shard.clone(), wrong_value, missing_dim]),
);
let result = find_shard_by_position(&[root], "line", "10");
assert_eq!(result, vec![match_shard]);
}
#[test]
fn test_recurses_through_children() {
let mut loc_deep = IndexMap::new();
loc_deep.insert("section".to_string(), "s1".to_string());
let deep = generate_localized_shard(Some(loc_deep), None);
let mut loc_mid = IndexMap::new();
loc_mid.insert("section".to_string(), "s0".to_string());
let mid = generate_localized_shard(Some(loc_mid), Some(vec![deep.clone()]));
let root = generate_localized_shard(None, Some(vec![mid]));
let result = find_shard_by_position(&[root], "section", "s1");
assert_eq!(result, vec![deep]);
}
}

src/query/mod.rs Normal file

@@ -0,0 +1,3 @@
mod find;
pub use find::{find_shard, find_shard_by_position, find_shard_by_set_dimension};


@@ -1,126 +0,0 @@
import glob
import os
from collections.abc import Generator
from datetime import datetime
from shutil import move
from typing import Annotated
import click
import typer
from rich import print
from rich.markdown import Markdown
from rich.panel import Panel
from streamd.localize import (
LocalizedShard,
RepositoryConfiguration,
localize_stream_file,
)
from streamd.localize.preconfigured_configurations import TaskConfiguration
from streamd.parse import parse_markdown_file
from streamd.query import find_shard_by_position
from streamd.settings import Settings
from streamd.timesheet.configuration import BasicTimesheetConfiguration
from streamd.timesheet.extract import extract_timesheets
app = typer.Typer()
def all_files(config: RepositoryConfiguration) -> Generator[LocalizedShard]:
for file_name in glob.glob(f"{glob.escape(Settings().base_folder)}/*.md"):
with open(file_name, "r") as file:
file_content = file.read()
if shard := localize_stream_file(
parse_markdown_file(file_name, file_content), config
):
yield shard
@app.command()
def todo() -> None:
all_shards = list(all_files(TaskConfiguration))
for task_shard in find_shard_by_position(all_shards, "task", "open"):
with open(task_shard.location["file"], "r") as file:
file_content = file.read().splitlines()
print(
Panel(
Markdown(
"\n".join(
file_content[
task_shard.start_line - 1 : task_shard.end_line
]
)
),
title=f"{task_shard.location['file']}:{task_shard.start_line}",
)
)
@app.command()
def edit(number: Annotated[int, typer.Argument()] = 1) -> None:
all_shards = list(all_files(TaskConfiguration))
sorted_shards = sorted(all_shards, key=lambda s: s.moment)
if abs(number) >= len(sorted_shards):
raise ValueError("Argument out of range")
selected_number = number
if selected_number >= 0:
selected_number = len(sorted_shards) - selected_number
else:
selected_number = -selected_number
click.edit(None, filename=sorted_shards[selected_number].location["file"])
@app.command()
def timesheet() -> None:
all_shards = list(all_files(BasicTimesheetConfiguration))
sheets = sorted(extract_timesheets(all_shards), key=lambda card: card.date)
for sheet in sheets:
print(sheet.date)
print(
",".join(
map(lambda card: f"{card.from_time},{card.to_time}", sheet.timecards)
),
)
@app.command()
def new() -> None:
streamd_directory = Settings().base_folder
timestamp = datetime.now().strftime("%Y%m%d-%H%M%S")
preliminary_file_name = f"{timestamp}_wip.md"
prelimary_path = os.path.join(streamd_directory, preliminary_file_name)
content = "# "
with open(prelimary_path, "w") as file:
_ = file.write(content)
click.edit(None, filename=prelimary_path)
with open(prelimary_path, "r") as file:
content = file.read()
parsed_content = parse_markdown_file(prelimary_path, content)
final_file_name = f"{timestamp}.md"
if parsed_content.shard is not None and len(
markers := parsed_content.shard.markers
):
final_file_name = f"{timestamp} {' '.join(markers)}.md"
final_path = os.path.join(streamd_directory, final_file_name)
_ = move(prelimary_path, final_path)
print(f"Saved as [yellow]{final_file_name}")
@app.callback(invoke_without_command=True)
def main(ctx: typer.Context):
if ctx.invoked_subcommand is None:
new()
if __name__ == "__main__":
app()


@@ -1,9 +0,0 @@
from .localize import localize_stream_file
from .localized_shard import LocalizedShard
from .repository_configuration import RepositoryConfiguration
__all__ = [
"RepositoryConfiguration",
"localize_stream_file",
"LocalizedShard",
]


@@ -1,92 +0,0 @@
import os
import re
from datetime import date, datetime, time
def extract_datetime_from_file_name(file_name: str) -> datetime | None:
FILE_NAME_REGEX = r"^(?P<date>\d{8})(?:-(?P<time>\d{4,6}))?.+.md$"
base_name = os.path.basename(file_name)
match = re.match(FILE_NAME_REGEX, base_name)
if match:
date_str = match.group("date")
time_str = match.group("time") or ""
time_str = time_str.ljust(6, "0")
datetime_str = f"{date_str} {time_str[:2]}:{time_str[2:4]}:{time_str[4:]}"
return datetime.strptime(datetime_str, "%Y%m%d %H:%M:%S")
return None
def extract_datetime_from_marker(marker: str) -> datetime | None:
"""
Extract a datetime from a marker string in the exact format: YYYYMMDDHHMMSS.
Returns:
Parsed datetime if the format is fulfilled and values are valid, else None.
"""
if not re.fullmatch(r"\d{14}", marker or ""):
return None
try:
return datetime.strptime(marker, "%Y%m%d%H%M%S")
except ValueError:
return None
def extract_date_from_marker(marker: str) -> date | None:
"""
Extract a date from a marker string in the exact format: YYYYMMDD.
Returns:
Parsed date if the format is fulfilled and values are valid, else None.
"""
if not re.fullmatch(r"\d{8}", marker or ""):
return None
try:
return datetime.strptime(marker, "%Y%m%d").date()
except ValueError:
return None
def extract_time_from_marker(marker: str) -> time | None: # noqa: F821
"""
Extract a time from a marker string in the exact format: HHMMSS.
Returns:
Parsed time if the format is fulfilled and values are valid, else None.
"""
if not re.fullmatch(r"\d{6}", marker or ""):
return None
try:
return datetime.strptime(marker, "%H%M%S").time()
except ValueError:
return None
def extract_datetime_from_marker_list(markers: list[str], inherited_datetime: datetime):
shard_time: time | None = None
shard_date: date | None = None
for marker in markers[::-1]:
if parsed_time := extract_time_from_marker(marker):
shard_time = parsed_time
if parsed_date := extract_date_from_marker(marker):
shard_date = parsed_date
if parsed_datetime := extract_datetime_from_marker(marker):
shard_date = parsed_datetime.date()
shard_time = parsed_datetime.time()
if shard_date and not shard_time:
return datetime.combine(shard_date, time(0, 0, 0))
return datetime.combine(
shard_date or inherited_datetime.date(), shard_time or inherited_datetime.time()
)
__all__ = [
"extract_datetime_from_file_name",
"extract_datetime_from_marker",
"extract_date_from_marker",
"extract_time_from_marker",
"extract_datetime_from_marker_list",
]


@@ -1,73 +0,0 @@
from datetime import datetime
from streamd.parse.shard import Shard, StreamFile
from .extract_datetime import (
extract_datetime_from_file_name,
extract_datetime_from_marker_list,
)
from .localized_shard import LocalizedShard
from .repository_configuration import RepositoryConfiguration
def localize_shard(
shard: Shard,
config: RepositoryConfiguration,
propagated: dict[str, str],
moment: datetime,
) -> LocalizedShard:
position = {**propagated}
private_position: dict[str, str] = {}
adjusted_moment: datetime = extract_datetime_from_marker_list(shard.markers, moment)
for marker in shard.markers:
if marker in config.markers:
marker_definition = config.markers[marker]
for placement in marker_definition.placements:
if placement.if_with <= set(shard.markers):
dimension = config.dimensions[placement.dimension]
value = placement.value or marker
if placement.overwrites or (
placement.dimension not in position
and placement.dimension not in private_position
):
if dimension.propagate:
position[placement.dimension] = value
else:
private_position[placement.dimension] = value
children = [
localize_shard(child, config, position, adjusted_moment)
for child in shard.children
]
position.update(private_position)
return LocalizedShard(
markers=shard.markers,
tags=shard.tags,
start_line=shard.start_line,
end_line=shard.end_line,
location=position,
children=children,
moment=adjusted_moment,
)
def localize_stream_file(
stream_file: StreamFile, config: RepositoryConfiguration
) -> LocalizedShard | None:
shard_date = extract_datetime_from_file_name(stream_file.file_name)
if not shard_date or not stream_file.shard:
raise ValueError("Could not extract date")
return localize_shard(
stream_file.shard, config, {"file": stream_file.file_name}, shard_date
)
__all__ = ["localize_stream_file"]


@@ -1,14 +0,0 @@
from __future__ import annotations
from datetime import datetime
from streamd.parse.shard import Shard
class LocalizedShard(Shard):
moment: datetime
location: dict[str, str]
children: list[LocalizedShard] = [] # pyright: ignore[reportIncompatibleVariableOverride]
__all__ = ["LocalizedShard"]


@@ -1,43 +0,0 @@
from streamd.localize.repository_configuration import (
Dimension,
Marker,
MarkerPlacement,
RepositoryConfiguration,
)
TaskConfiguration = RepositoryConfiguration(
dimensions={
"task": Dimension(
display_name="Task",
comment="If placed, the given shard is a task. The placement determines the state.",
propagate=False,
),
"project": Dimension(
display_name="Project",
comment="Project the task is attached to",
propagate=True,
),
},
markers={
"Task": Marker(
display_name="Task",
placements=[
MarkerPlacement(dimension="task", value="open"),
MarkerPlacement(if_with={"Done"}, dimension="task", value="done"),
MarkerPlacement(if_with={"Waiting"}, dimension="task", value="waiting"),
MarkerPlacement(
if_with={"Cancelled"}, dimension="task", value="cancelled"
),
MarkerPlacement(
if_with={"NotDone"}, dimension="task", value="cancelled"
),
],
),
"WaitingFor": Marker(
display_name="Task",
placements=[
MarkerPlacement(dimension="task", value="waiting"),
],
),
},
)


@@ -1,106 +0,0 @@
from __future__ import annotations
from pydantic import BaseModel
class Dimension(BaseModel):
display_name: str
comment: str | None = None
propagate: bool = False
class MarkerPlacement(BaseModel):
if_with: set[str] = set()
dimension: str
value: str | None = None
overwrites: bool = True
class Marker(BaseModel):
display_name: str
placements: list[MarkerPlacement] = []
class RepositoryConfiguration(BaseModel):
dimensions: dict[str, Dimension]
markers: dict[str, Marker]
def merge_single_dimension(base: Dimension, second: Dimension) -> Dimension:
second_fields_set: set[str] = getattr(second, "model_fields_set", set())
return Dimension(
display_name=second.display_name or base.display_name,
comment=base.comment if second.comment is None else second.comment,
propagate=second.propagate
if "propagate" in second_fields_set
else base.propagate,
)
def merge_dimensions(
base: dict[str, Dimension], second: dict[str, Dimension]
) -> dict[str, Dimension]:
merged: dict[str, Dimension] = dict(base)
for key, second_dimension in second.items():
if key in merged:
merged[key] = merge_single_dimension(merged[key], second_dimension)
else:
merged[key] = second_dimension
return merged
def _placement_identity(p: MarkerPlacement) -> tuple[frozenset[str], str]:
return (frozenset(p.if_with), p.dimension)
def merge_single_marker(base: Marker, second: Marker) -> Marker:
merged_display_name = second.display_name or base.display_name
merged_placements: list[MarkerPlacement] = []
seen: dict[tuple[frozenset[str], str], int] = {}
for placement in base.placements:
ident = _placement_identity(placement)
seen[ident] = len(merged_placements)
merged_placements.append(placement)
for placement in second.placements:
ident = _placement_identity(placement)
if ident in seen:
merged_placements[seen[ident]] = placement
else:
seen[ident] = len(merged_placements)
merged_placements.append(placement)
return Marker(display_name=merged_display_name, placements=merged_placements)
def merge_markers(
base: dict[str, Marker], second: dict[str, Marker]
) -> dict[str, Marker]:
merged: dict[str, Marker] = dict(base)
for key, second_marker in second.items():
if key in merged:
merged[key] = merge_single_marker(merged[key], second_marker)
else:
merged[key] = second_marker
return merged
def merge_repository_configuration(
base: RepositoryConfiguration, second: RepositoryConfiguration
) -> RepositoryConfiguration:
return RepositoryConfiguration(
dimensions=merge_dimensions(base.dimensions, second.dimensions),
markers=merge_markers(base.markers, second.markers),
)
__all__ = [
"Dimension",
"Marker",
"MarkerPlacement",
"RepositoryConfiguration",
"merge_repository_configuration",
]


@@ -1,4 +0,0 @@
from .shard import Shard, StreamFile
from .parse import parse_markdown_file
__all__ = ["Shard", "StreamFile", "parse_markdown_file"]


@@ -1,92 +0,0 @@
import re
from collections.abc import Iterable
from typing import cast
from mistletoe.block_token import BlockToken
from mistletoe.span_token import Emphasis, Link, RawText, Strikethrough, Strong
from mistletoe.token import Token
from .markdown_tag import Tag
def extract_markers_and_tags_from_single_token(
token: Token,
marker_boundary_encountered: bool,
return_at_first_marker: bool = False,
) -> tuple[list[str], list[str], bool]:
result_markers: list[str] = []
result_tags: list[str] = []
result_marker_boundary_encountered = marker_boundary_encountered
if isinstance(token, Tag):
content = cast(str, token.content)
if marker_boundary_encountered:
result_tags.append(content)
else:
result_markers.append(content)
elif isinstance(token, (Emphasis, Strong, Strikethrough, Link)):
children = list(token.children or [])
markers, tags, child_marker_boundary_encountered = (
extract_markers_and_tags_from_tokens(
children,
marker_boundary_encountered,
return_at_first_marker,
)
)
result_markers.extend(markers)
result_tags.extend(tags)
result_marker_boundary_encountered = (
marker_boundary_encountered or child_marker_boundary_encountered
)
elif isinstance(token, RawText):
content_raw = cast(str, token.content)
if not re.match(r"^[\s]*$", content_raw):
result_marker_boundary_encountered = True
else:
result_marker_boundary_encountered = True
return result_markers, result_tags, result_marker_boundary_encountered
def extract_markers_and_tags_from_tokens(
tokens: Iterable[Token],
marker_boundary_encountered: bool,
return_at_first_marker: bool = False,
) -> tuple[list[str], list[str], bool]:
result_markers: list[str] = []
result_tags: list[str] = []
result_marker_boundary_encountered = marker_boundary_encountered
for child in tokens:
markers, tags, child_marker_boundary_encountered = (
extract_markers_and_tags_from_single_token(
child, result_marker_boundary_encountered, return_at_first_marker
)
)
result_markers.extend(markers)
result_tags.extend(tags)
result_marker_boundary_encountered = (
marker_boundary_encountered or child_marker_boundary_encountered
)
if len(result_markers) > 0 and return_at_first_marker:
break
return result_markers, result_tags, result_marker_boundary_encountered
def extract_markers_and_tags(block_token: BlockToken) -> tuple[list[str], list[str]]:
children = list(block_token.children or [])
markers, tags, _ = extract_markers_and_tags_from_tokens(children, False)
return markers, tags
def has_markers(block_token: BlockToken) -> bool:
children = list(block_token.children or [])
markers, _, _ = extract_markers_and_tags_from_tokens(
children, False, return_at_first_marker=True
)
return len(markers) > 0
__all__ = ["extract_markers_and_tags", "has_markers"]


@@ -1,13 +0,0 @@
from itertools import pairwise
from typing import TypeVar
A = TypeVar("A")
def split_at(list_to_be_split: list[A], positions: list[int]):
positions = sorted(set([0, *positions, len(list_to_be_split)]))
return [list_to_be_split[left:right] for left, right in pairwise(positions)]
__all__ = ["split_at"]


@@ -1,23 +0,0 @@
import re
from typing import cast
from mistletoe.markdown_renderer import Fragment, MarkdownRenderer
from mistletoe.span_token import SpanToken
class Tag(SpanToken):
parse_inner: bool = False
pattern: re.Pattern[str] = re.compile(r"@([^\s*\x60~\[\]]+)")
class TagMarkdownRenderer(MarkdownRenderer):
def __init__(self) -> None:
super().__init__(Tag) # pyright: ignore[reportUnknownMemberType]
def render_tag(self, token: Tag):
content = cast(str, token.content)
yield Fragment("@")
yield Fragment(content)
__all__ = ["Tag", "TagMarkdownRenderer"]


@@ -1,258 +0,0 @@
from collections import Counter
from typing import cast
from mistletoe.block_token import (
BlockToken,
Document,
Heading,
List,
ListItem,
Paragraph,
)
from .extract_tag import extract_markers_and_tags, has_markers
from .list import split_at
from .markdown_tag import TagMarkdownRenderer
from .shard import Shard, StreamFile
def get_line_number(block_token: BlockToken) -> int:
return cast(int, block_token.line_number) # pyright: ignore[reportAttributeAccessIssue]
def build_shard(
start_line: int,
end_line: int,
markers: list[str] | None = None,
tags: list[str] | None = None,
children: list[Shard] | None = None,
) -> Shard:
markers = markers or []
tags = tags or []
children = children or []
if (
len(children) == 1
and len(tags) == 0
and len(markers) == 0
and children[0].start_line == start_line
and children[0].end_line == end_line
):
return children[0]
return Shard(
markers=markers,
tags=tags,
children=children,
start_line=start_line,
end_line=end_line,
)
def merge_into_first_shard(
shards: list[Shard],
start_line: int,
end_line: int,
additional_tags: list[str] | None = None,
) -> Shard:
return shards[0].model_copy(
update={
"start_line": start_line,
"end_line": end_line,
"children": shards[1:],
"tags": shards[0].tags + (additional_tags or []),
}
)
def find_paragraph_shard_positions(block_tokens: list[BlockToken]) -> list[int]:
return [
index
for index, block_token in enumerate(block_tokens)
if isinstance(block_token, Paragraph) and has_markers(block_token)
]
def _heading_level(heading: Heading) -> int:
return cast(int, heading.level)
def find_headings_by_level(
block_tokens: list[BlockToken], header_level: int
) -> list[int]:
return [
index
for index, block_token in enumerate(block_tokens)
if isinstance(block_token, Heading)
and _heading_level(block_token) == header_level
]
def calculate_heading_level_for_next_split(
block_tokens: list[BlockToken],
) -> int | None:
"""
If there is no marker in any heading, then return None.
If only the first token is a heading with a marker, then return None.
Otherwise: Return the heading level with the lowest level (h1 < h2), of which there are two or which has a marker (and doesn't stem from first)
"""
level_of_headings_without_first_with_marker: list[int] = [
_heading_level(token)
for token in block_tokens[1:]
if isinstance(token, Heading) and has_markers(token)
]
if len(level_of_headings_without_first_with_marker) == 0:
return None
heading_level_counter: Counter[int] = Counter(
[_heading_level(token) for token in block_tokens if isinstance(token, Heading)]
)
return min(
[level for level, count in heading_level_counter.items() if count >= 2]
+ level_of_headings_without_first_with_marker
)
def parse_single_block_shards(
block_token: BlockToken, start_line: int, end_line: int
) -> tuple[Shard | None, list[str]]:
markers: list[str] = []
tags: list[str] = []
children: list[Shard] = []
if isinstance(block_token, List):
list_items: list[ListItem] = ( # pyright: ignore[reportAssignmentType]
list(block_token.children) if block_token.children is not None else []
)
for index, list_item in enumerate(list_items):
list_item_start_line = get_line_number(list_item)
list_item_end_line = (
get_line_number(list_items[index + 1]) - 1
if index + 1 < len(list_items)
else end_line
)
list_item_shard, list_item_tags = parse_multiple_block_shards(
list_item.children, # pyright: ignore[reportArgumentType]
list_item_start_line,
list_item_end_line,
)
if list_item_shard is not None:
children.append(list_item_shard)
tags.extend(list_item_tags)
elif isinstance(block_token, (Paragraph, Heading)):
markers, tags = extract_markers_and_tags(block_token)
if len(markers) == 0 and len(children) == 0:
return None, tags
return build_shard(
start_line, end_line, markers=markers, tags=tags, children=children
), []
def parse_multiple_block_shards(
block_tokens: list[BlockToken],
start_line: int,
end_line: int,
enforce_shard: bool = False,
) -> tuple[Shard | None, list[str]]:
is_first_block_heading = isinstance(block_tokens[0], Heading) and has_markers(
block_tokens[0]
)
paragraph_positions = find_paragraph_shard_positions(block_tokens)
children: list[Shard] = []
tags: list[str] = []
is_first_block_only_with_marker = False
for i, token in enumerate(block_tokens):
if i in paragraph_positions:
is_first_block_only_with_marker = i == 0
child_start_line = get_line_number(token)
child_end_line = (
get_line_number(block_tokens[i + 1]) - 1
if i + 1 < len(block_tokens)
else end_line
)
child_shard, child_tags = parse_single_block_shards(
token, child_start_line, child_end_line
)
if child_shard is not None:
children.append(child_shard)
if len(child_tags) > 0:
tags.extend(child_tags)
if len(children) == 0 and not enforce_shard:
return None, tags
if is_first_block_heading or is_first_block_only_with_marker:
return merge_into_first_shard(children, start_line, end_line, tags), []
else:
return build_shard(start_line, end_line, tags=tags, children=children), []
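Both parsing functions above derive each child's line range the same way: a child ends one line before the next child starts, and the last child inherits the enclosing end line. A minimal stand-alone sketch of that pattern (names are illustrative):

```python
# Each child's range runs from its own start line to the line before the
# next child's start line; the last child extends to the enclosing end line.
def child_ranges(start_lines: list[int], end_line: int) -> list[tuple[int, int]]:
    return [
        (start, start_lines[i + 1] - 1 if i + 1 < len(start_lines) else end_line)
        for i, start in enumerate(start_lines)
    ]
```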
def parse_header_shards(
block_tokens: list[BlockToken],
start_line: int,
end_line: int,
use_first_child_as_header: bool = False,
) -> Shard | None:
if len(block_tokens) == 0:
return build_shard(start_line, end_line)
split_at_heading_level = calculate_heading_level_for_next_split(block_tokens)
if split_at_heading_level is None:
return parse_multiple_block_shards(
block_tokens, start_line, end_line, enforce_shard=True
)[0]
heading_positions = find_headings_by_level(block_tokens, split_at_heading_level)
block_tokens_split_by_heading = split_at(block_tokens, heading_positions)
children: list[Shard] = []
for i, child_blocks in enumerate(block_tokens_split_by_heading):
child_start_line = get_line_number(child_blocks[0])
child_end_line = (
get_line_number(block_tokens_split_by_heading[i + 1][0]) - 1
if i + 1 < len(block_tokens_split_by_heading)
else end_line
)
if child_shard := parse_header_shards(
child_blocks,
child_start_line,
child_end_line,
use_first_child_as_header=i > 0 or 0 in heading_positions,
):
children.append(child_shard)
if use_first_child_as_header and len(children) > 0:
return merge_into_first_shard(children, start_line, end_line)
else:
return build_shard(start_line, end_line, children=children)
def parse_markdown_file(file_name: str, file_content: str) -> StreamFile:
shard = build_shard(1, max([len(file_content.splitlines()), 1]))
with TagMarkdownRenderer():
ast = Document(file_content)
block_tokens: list[BlockToken] = ast.children # pyright: ignore[reportAssignmentType]
if len(block_tokens) > 0:
if parsed_shard := parse_header_shards(
block_tokens, shard.start_line, shard.end_line
):
shard = parsed_shard
return StreamFile(shard=shard, file_name=file_name)
__all__ = ["Shard", "StreamFile", "parse_markdown_file"]


@@ -1,19 +0,0 @@
from __future__ import annotations
from pydantic import BaseModel
class Shard(BaseModel):
markers: list[str] = []
tags: list[str] = []
start_line: int
end_line: int
children: list[Shard] = []
class StreamFile(BaseModel):
file_name: str
shard: Shard | None = None
__all__ = ["Shard", "StreamFile"]


@@ -1,3 +0,0 @@
from .find import find_shard, find_shard_by_position
__all__ = ["find_shard_by_position", "find_shard"]


@@ -1,36 +0,0 @@
from typing import Callable
from streamd.localize import LocalizedShard
def find_shard(
shards: list[LocalizedShard], query_function: Callable[[LocalizedShard], bool]
) -> list[LocalizedShard]:
found_shards: list[LocalizedShard] = []
for shard in shards:
if query_function(shard):
found_shards.append(shard)
found_shards.extend(find_shard(shard.children, query_function))
return found_shards
def find_shard_by_position(
shards: list[LocalizedShard], dimension: str, value: str
) -> list[LocalizedShard]:
return find_shard(
shards,
lambda shard: (
dimension in shard.location and shard.location[dimension] == value
),
)
def find_shard_by_set_dimension(
shards: list[LocalizedShard], dimension: str
) -> list[LocalizedShard]:
return find_shard(shards, lambda shard: dimension in shard.location)
__all__ = ["find_shard_by_position", "find_shard", "find_shard_by_set_dimension"]


@@ -1,38 +0,0 @@
import os
from typing import ClassVar, override
from pydantic_settings import (
BaseSettings,
PydanticBaseSettingsSource,
SettingsConfigDict,
YamlConfigSettingsSource,
)
from xdg_base_dirs import xdg_config_home
SETTINGS_FILE = xdg_config_home() / "streamd" / "config.yaml"
class Settings(BaseSettings):
model_config: ClassVar[SettingsConfigDict] = SettingsConfigDict(
env_file_encoding="utf-8"
)
base_folder: str = os.getcwd()
@classmethod
@override
def settings_customise_sources(
cls,
settings_cls: type[BaseSettings],
init_settings: PydanticBaseSettingsSource,
env_settings: PydanticBaseSettingsSource,
dotenv_settings: PydanticBaseSettingsSource,
file_secret_settings: PydanticBaseSettingsSource,
) -> tuple[PydanticBaseSettingsSource, ...]:
return (
init_settings,
YamlConfigSettingsSource(settings_cls, yaml_file=SETTINGS_FILE),
dotenv_settings,
env_settings,
file_secret_settings,
)


@@ -1,115 +0,0 @@
from enum import StrEnum
from streamd.localize import RepositoryConfiguration
from streamd.localize.repository_configuration import (
Dimension,
Marker,
MarkerPlacement,
)
TIMESHEET_TAG = "Timesheet"
TIMESHEET_DIMENSION_NAME = "timesheet"
class TimesheetPointType(StrEnum):
Card = "CARD"
SickLeave = "SICK_LEAVE"
Vacation = "VACATION"
Undertime = "UNDERTIME"
Holiday = "HOLIDAY"
Break = "BREAK"
BasicTimesheetConfiguration = RepositoryConfiguration(
dimensions={
TIMESHEET_DIMENSION_NAME: Dimension(
display_name="Timesheet",
comment="Used by Timesheet-Subcommand to create Timecards",
propagate=False,
)
},
markers={
TIMESHEET_TAG: Marker(
display_name="A default time card",
placements=[
MarkerPlacement(
dimension=TIMESHEET_DIMENSION_NAME,
value=TimesheetPointType.Card.value,
overwrites=False,
)
],
),
"VacationDay": Marker(
display_name="Vacation Day",
placements=[
MarkerPlacement(
if_with={TIMESHEET_TAG},
dimension=TIMESHEET_DIMENSION_NAME,
value=TimesheetPointType.Vacation.value,
)
],
),
"Break": Marker(
display_name="Break",
placements=[
MarkerPlacement(
if_with={TIMESHEET_TAG},
dimension=TIMESHEET_DIMENSION_NAME,
value=TimesheetPointType.Break.value,
)
],
),
"LunchBreak": Marker(
display_name="Break",
placements=[
MarkerPlacement(
if_with={TIMESHEET_TAG},
dimension=TIMESHEET_DIMENSION_NAME,
value=TimesheetPointType.Break.value,
)
],
),
"Feierabend": Marker(
display_name="Break",
placements=[
MarkerPlacement(
if_with={TIMESHEET_TAG},
dimension=TIMESHEET_DIMENSION_NAME,
value=TimesheetPointType.Break.value,
)
],
),
"Holiday": Marker(
display_name="Offical Holiday",
placements=[
MarkerPlacement(
if_with={TIMESHEET_TAG},
dimension=TIMESHEET_DIMENSION_NAME,
value=TimesheetPointType.Holiday.value,
)
],
),
"SickLeave": Marker(
display_name="Sick Leave",
placements=[
MarkerPlacement(
if_with={TIMESHEET_TAG},
dimension=TIMESHEET_DIMENSION_NAME,
value=TimesheetPointType.SickLeave.value,
)
],
),
"UndertimeDay": Marker(
display_name="Undertime Leave",
placements=[
MarkerPlacement(
if_with={TIMESHEET_TAG},
dimension=TIMESHEET_DIMENSION_NAME,
value=TimesheetPointType.Undertime.value,
)
],
),
},
)
__all__ = ["BasicTimesheetConfiguration", "TIMESHEET_TAG", "TIMESHEET_DIMENSION_NAME"]


@@ -1,113 +0,0 @@
from datetime import datetime
from itertools import groupby
from pydantic import BaseModel
from streamd.localize import LocalizedShard
from streamd.query.find import find_shard_by_set_dimension
from .configuration import TIMESHEET_DIMENSION_NAME, TimesheetPointType
from .timecard import SpecialDayType, Timecard, Timesheet
class TimesheetPoint(BaseModel):
moment: datetime
type: TimesheetPointType
def shard_to_timesheet_point(shard: LocalizedShard) -> TimesheetPoint:
return TimesheetPoint(
moment=shard.moment,
type=TimesheetPointType(shard.location[TIMESHEET_DIMENSION_NAME]),
)
def shards_to_timesheet_points(shards: list[LocalizedShard]) -> list[TimesheetPoint]:
return list(
map(
shard_to_timesheet_point,
find_shard_by_set_dimension(shards, TIMESHEET_DIMENSION_NAME),
)
)
def aggregate_timecard_day(points: list[TimesheetPoint]) -> Timesheet | None:
sorted_points = sorted(points, key=lambda point: point.moment)
is_sick_leave = False
special_day_type = None
card_date = sorted_points[0].moment.date()
# We expect timesheet points to alternate between "Card" (start work) and
# "Break" (end work). Starting in "break" means we are not currently in a
# work block until we see the first Card.
last_is_break = True
last_time = sorted_points[0].moment.time()
timecards: list[Timecard] = []
for point in sorted_points:
if point.moment.date() != card_date:
raise ValueError("Dates of all given timesheet days should be consistent")
point_time = point.moment.time()
match point.type:
case TimesheetPointType.Holiday:
if special_day_type is not None:
raise ValueError(
f"{card_date} is both {point.type} and {special_day_type}"
)
special_day_type = SpecialDayType.Holiday
case TimesheetPointType.Vacation:
if special_day_type is not None:
raise ValueError(
f"{card_date} is both {point.type} and {special_day_type}"
)
special_day_type = SpecialDayType.Vacation
case TimesheetPointType.Undertime:
if special_day_type is not None:
raise ValueError(
f"{card_date} is both {point.type} and {special_day_type}"
)
special_day_type = SpecialDayType.Undertime
case TimesheetPointType.SickLeave:
is_sick_leave = True
case TimesheetPointType.Break:
if not last_is_break:
timecards.append(Timecard(from_time=last_time, to_time=point_time))
last_is_break = True
last_time = point_time
case TimesheetPointType.Card:
if last_is_break:
last_is_break = False
last_time = point_time
if not last_is_break:
raise ValueError(f"Last Timecard of {card_date} is not a break!")
if len(timecards) == 0 and not is_sick_leave and special_day_type is None:
return None
return Timesheet(
date=card_date,
is_sick_leave=is_sick_leave,
special_day_type=special_day_type,
timecards=timecards,
)
def aggregate_timecards(points: list[TimesheetPoint]) -> list[Timesheet]:
day_timecards = [
aggregate_timecard_day(list(timecard))
for _date, timecard in groupby(points, key=lambda point: point.moment.date())
]
return [timecard for timecard in day_timecards if timecard is not None]
def extract_timesheets(shards: list[LocalizedShard]) -> list[Timesheet]:
points = shards_to_timesheet_points(shards)
return aggregate_timecards(points)
__all__ = ["extract_timesheets"]


@@ -1,23 +0,0 @@
from datetime import date, time
from enum import StrEnum
from pydantic import BaseModel
class SpecialDayType(StrEnum):
Vacation = "VACATION"
Undertime = "UNDERTIME"
Holiday = "HOLIDAY"
Weekend = "WEEKEND"
class Timecard(BaseModel):
from_time: time
to_time: time
class Timesheet(BaseModel):
date: date
is_sick_leave: bool = False
special_day_type: SpecialDayType | None = None
timecards: list[Timecard]


@@ -0,0 +1,84 @@
use once_cell::sync::Lazy;
use crate::models::{Dimension, Marker, MarkerPlacement, RepositoryConfiguration};
use super::TimesheetPointType;
pub const TIMESHEET_TAG: &str = "Timesheet";
pub const TIMESHEET_DIMENSION_NAME: &str = "timesheet";
/// Pre-configured repository configuration for timesheet tracking.
#[allow(non_upper_case_globals)]
pub static BasicTimesheetConfiguration: Lazy<RepositoryConfiguration> = Lazy::new(|| {
RepositoryConfiguration::new()
.with_dimension(
TIMESHEET_DIMENSION_NAME,
Dimension::new("Timesheet")
.with_comment("Used by Timesheet-Subcommand to create Timecards")
.with_propagate(false),
)
.with_marker(
TIMESHEET_TAG,
Marker::new("A default time card").with_placements(vec![MarkerPlacement::new(
TIMESHEET_DIMENSION_NAME,
)
.with_value(TimesheetPointType::Card.as_str())
.with_overwrites(false)]),
)
.with_marker(
"VacationDay",
Marker::new("Vacation Day").with_placements(vec![MarkerPlacement::new(
TIMESHEET_DIMENSION_NAME,
)
.with_if_with(vec![TIMESHEET_TAG])
.with_value(TimesheetPointType::Vacation.as_str())]),
)
.with_marker(
"Break",
Marker::new("Break").with_placements(vec![MarkerPlacement::new(
TIMESHEET_DIMENSION_NAME,
)
.with_if_with(vec![TIMESHEET_TAG])
.with_value(TimesheetPointType::Break.as_str())]),
)
.with_marker(
"LunchBreak",
Marker::new("Break").with_placements(vec![MarkerPlacement::new(
TIMESHEET_DIMENSION_NAME,
)
.with_if_with(vec![TIMESHEET_TAG])
.with_value(TimesheetPointType::Break.as_str())]),
)
.with_marker(
"Feierabend",
Marker::new("Break").with_placements(vec![MarkerPlacement::new(
TIMESHEET_DIMENSION_NAME,
)
.with_if_with(vec![TIMESHEET_TAG])
.with_value(TimesheetPointType::Break.as_str())]),
)
.with_marker(
"Holiday",
Marker::new("Official Holiday").with_placements(vec![MarkerPlacement::new(
TIMESHEET_DIMENSION_NAME,
)
.with_if_with(vec![TIMESHEET_TAG])
.with_value(TimesheetPointType::Holiday.as_str())]),
)
.with_marker(
"SickLeave",
Marker::new("Sick Leave").with_placements(vec![MarkerPlacement::new(
TIMESHEET_DIMENSION_NAME,
)
.with_if_with(vec![TIMESHEET_TAG])
.with_value(TimesheetPointType::SickLeave.as_str())]),
)
.with_marker(
"UndertimeDay",
Marker::new("Undertime Leave").with_placements(vec![MarkerPlacement::new(
TIMESHEET_DIMENSION_NAME,
)
.with_if_with(vec![TIMESHEET_TAG])
.with_value(TimesheetPointType::Undertime.as_str())]),
)
});

src/timesheet/extract.rs Normal file

@@ -0,0 +1,537 @@
use chrono::{DateTime, Utc};
use itertools::Itertools;
use crate::error::StreamdError;
use crate::models::{LocalizedShard, SpecialDayType, Timecard, Timesheet};
use crate::query::find_shard_by_set_dimension;
use super::configuration::TIMESHEET_DIMENSION_NAME;
use super::TimesheetPointType;
/// A point in time with an associated timesheet type.
#[derive(Debug, Clone)]
struct TimesheetPoint {
moment: DateTime<Utc>,
point_type: TimesheetPointType,
}
/// Convert a localized shard to a timesheet point.
fn shard_to_timesheet_point(shard: &LocalizedShard) -> Option<TimesheetPoint> {
let type_str = shard.location.get(TIMESHEET_DIMENSION_NAME)?;
let point_type = type_str.parse::<TimesheetPointType>().ok()?;
Some(TimesheetPoint {
moment: shard.moment,
point_type,
})
}
/// Convert localized shards to timesheet points.
fn shards_to_timesheet_points(shards: &[LocalizedShard]) -> Vec<TimesheetPoint> {
find_shard_by_set_dimension(shards, TIMESHEET_DIMENSION_NAME)
.iter()
.filter_map(shard_to_timesheet_point)
.collect()
}
/// Aggregate timesheet points for a single day into a Timesheet.
fn aggregate_timecard_day(points: &[TimesheetPoint]) -> Result<Option<Timesheet>, StreamdError> {
if points.is_empty() {
return Ok(None);
}
let sorted_points: Vec<_> = {
let mut pts = points.to_vec();
pts.sort_by_key(|p| p.moment);
pts
};
let card_date = sorted_points[0].moment.date_naive();
let mut is_sick_leave = false;
let mut special_day_type: Option<SpecialDayType> = None;
// State machine: starting in "break" mode (not working)
let mut last_is_break = true;
let mut last_time = sorted_points[0].moment.time();
let mut timecards: Vec<Timecard> = Vec::new();
for point in &sorted_points {
if point.moment.date_naive() != card_date {
return Err(StreamdError::TimesheetError(
"Dates of all given timesheet days should be consistent".to_string(),
));
}
let point_time = point.moment.time();
match point.point_type {
TimesheetPointType::Holiday => {
if special_day_type.is_some() {
return Err(StreamdError::TimesheetError(format!(
"{} is both {:?} and {:?}",
card_date, point.point_type, special_day_type
)));
}
special_day_type = Some(SpecialDayType::Holiday);
}
TimesheetPointType::Vacation => {
if special_day_type.is_some() {
return Err(StreamdError::TimesheetError(format!(
"{} is both {:?} and {:?}",
card_date, point.point_type, special_day_type
)));
}
special_day_type = Some(SpecialDayType::Vacation);
}
TimesheetPointType::Undertime => {
if special_day_type.is_some() {
return Err(StreamdError::TimesheetError(format!(
"{} is both {:?} and {:?}",
card_date, point.point_type, special_day_type
)));
}
special_day_type = Some(SpecialDayType::Undertime);
}
TimesheetPointType::SickLeave => {
is_sick_leave = true;
}
TimesheetPointType::Break => {
if !last_is_break {
timecards.push(Timecard::new(last_time, point_time));
last_is_break = true;
last_time = point_time;
}
}
TimesheetPointType::Card => {
if last_is_break {
last_is_break = false;
last_time = point_time;
}
}
}
}
// Check that we ended in break mode
if !last_is_break {
return Err(StreamdError::TimesheetError(format!(
"Last Timecard of {} is not a break!",
card_date
)));
}
// Only return a timesheet if there's meaningful data
if timecards.is_empty() && !is_sick_leave && special_day_type.is_none() {
return Ok(None);
}
Ok(Some(Timesheet {
date: card_date,
is_sick_leave,
special_day_type,
timecards,
}))
}
/// Aggregate timesheet points into timesheets, grouped by day.
fn aggregate_timecards(points: &[TimesheetPoint]) -> Result<Vec<Timesheet>, StreamdError> {
let mut timesheets = Vec::new();
// Group by date
for (_date, group) in &points.iter().chunk_by(|p| p.moment.date_naive()) {
let day_points: Vec<_> = group.cloned().collect();
if let Some(timesheet) = aggregate_timecard_day(&day_points)? {
timesheets.push(timesheet);
}
}
Ok(timesheets)
}
/// Extract timesheets from localized shards.
pub fn extract_timesheets(shards: &[LocalizedShard]) -> Result<Vec<Timesheet>, StreamdError> {
let points = shards_to_timesheet_points(shards);
aggregate_timecards(&points)
}
#[cfg(test)]
mod tests {
use super::*;
use chrono::{NaiveTime, TimeZone};
use indexmap::IndexMap;
fn point(at: DateTime<Utc>, point_type: TimesheetPointType) -> LocalizedShard {
let mut location = IndexMap::new();
location.insert(
TIMESHEET_DIMENSION_NAME.to_string(),
point_type.as_str().to_string(),
);
location.insert("file".to_string(), "dummy.md".to_string());
LocalizedShard {
moment: at,
markers: vec!["Timesheet".to_string()],
tags: vec![],
start_line: 1,
end_line: 1,
children: vec![],
location,
}
}
#[test]
fn test_single_work_block() {
let day = Utc.with_ymd_and_hms(2026, 2, 1, 0, 0, 0).unwrap();
let shards = vec![
point(
day.with_time(NaiveTime::from_hms_opt(9, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Card,
),
point(
day.with_time(NaiveTime::from_hms_opt(17, 30, 0).unwrap())
.unwrap(),
TimesheetPointType::Break,
),
];
let result = extract_timesheets(&shards).unwrap();
assert_eq!(result.len(), 1);
assert_eq!(result[0].date, day.date_naive());
assert!(!result[0].is_sick_leave);
assert!(result[0].special_day_type.is_none());
assert_eq!(result[0].timecards.len(), 1);
assert_eq!(
result[0].timecards[0].from_time,
NaiveTime::from_hms_opt(9, 0, 0).unwrap()
);
assert_eq!(
result[0].timecards[0].to_time,
NaiveTime::from_hms_opt(17, 30, 0).unwrap()
);
}
#[test]
fn test_three_work_blocks_separated_by_breaks() {
let day = Utc.with_ymd_and_hms(2026, 2, 1, 0, 0, 0).unwrap();
let shards = vec![
point(
day.with_time(NaiveTime::from_hms_opt(7, 15, 0).unwrap())
.unwrap(),
TimesheetPointType::Card,
),
point(
day.with_time(NaiveTime::from_hms_opt(12, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Break,
),
point(
day.with_time(NaiveTime::from_hms_opt(12, 45, 0).unwrap())
.unwrap(),
TimesheetPointType::Card,
),
point(
day.with_time(NaiveTime::from_hms_opt(15, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Break,
),
point(
day.with_time(NaiveTime::from_hms_opt(16, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Card,
),
point(
day.with_time(NaiveTime::from_hms_opt(17, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Break,
),
];
let result = extract_timesheets(&shards).unwrap();
assert_eq!(result.len(), 1);
assert_eq!(result[0].timecards.len(), 3);
assert_eq!(
result[0].timecards[0].from_time,
NaiveTime::from_hms_opt(7, 15, 0).unwrap()
);
assert_eq!(
result[0].timecards[0].to_time,
NaiveTime::from_hms_opt(12, 0, 0).unwrap()
);
}
#[test]
fn test_input_order_is_not_required_within_a_day() {
let day = Utc.with_ymd_and_hms(2026, 2, 1, 0, 0, 0).unwrap();
let shards = vec![
point(
day.with_time(NaiveTime::from_hms_opt(15, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Break,
),
point(
day.with_time(NaiveTime::from_hms_opt(7, 15, 0).unwrap())
.unwrap(),
TimesheetPointType::Card,
),
point(
day.with_time(NaiveTime::from_hms_opt(12, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Break,
),
point(
day.with_time(NaiveTime::from_hms_opt(12, 45, 0).unwrap())
.unwrap(),
TimesheetPointType::Card,
),
point(
day.with_time(NaiveTime::from_hms_opt(17, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Break,
),
point(
day.with_time(NaiveTime::from_hms_opt(16, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Card,
),
];
let result = extract_timesheets(&shards).unwrap();
assert_eq!(result.len(), 1);
assert_eq!(result[0].timecards.len(), 3);
}
#[test]
fn test_groups_by_day() {
let day1 = Utc.with_ymd_and_hms(2026, 2, 1, 0, 0, 0).unwrap();
let day2 = Utc.with_ymd_and_hms(2026, 2, 2, 0, 0, 0).unwrap();
let shards = vec![
point(
day1.with_time(NaiveTime::from_hms_opt(9, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Card,
),
point(
day1.with_time(NaiveTime::from_hms_opt(17, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Break,
),
point(
day2.with_time(NaiveTime::from_hms_opt(10, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Card,
),
point(
day2.with_time(NaiveTime::from_hms_opt(18, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Break,
),
];
let result = extract_timesheets(&shards).unwrap();
assert_eq!(result.len(), 2);
assert_eq!(result[0].date, day1.date_naive());
assert_eq!(result[1].date, day2.date_naive());
}
#[test]
fn test_day_with_only_special_day_type_vacation() {
let day = Utc.with_ymd_and_hms(2026, 2, 1, 0, 0, 0).unwrap();
let shards = vec![
point(
day.with_time(NaiveTime::from_hms_opt(8, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Vacation,
),
point(
day.with_time(NaiveTime::from_hms_opt(9, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Break,
),
];
let result = extract_timesheets(&shards).unwrap();
assert_eq!(result.len(), 1);
assert_eq!(result[0].special_day_type, Some(SpecialDayType::Vacation));
assert!(result[0].timecards.is_empty());
}
#[test]
fn test_day_with_only_special_day_type_holiday() {
let day = Utc.with_ymd_and_hms(2026, 2, 1, 0, 0, 0).unwrap();
let shards = vec![
point(
day.with_time(NaiveTime::from_hms_opt(8, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Holiday,
),
point(
day.with_time(NaiveTime::from_hms_opt(9, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Break,
),
];
let result = extract_timesheets(&shards).unwrap();
assert_eq!(result.len(), 1);
assert_eq!(result[0].special_day_type, Some(SpecialDayType::Holiday));
}
#[test]
fn test_day_with_only_special_day_type_undertime() {
let day = Utc.with_ymd_and_hms(2026, 2, 1, 0, 0, 0).unwrap();
let shards = vec![
point(
day.with_time(NaiveTime::from_hms_opt(8, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Undertime,
),
point(
day.with_time(NaiveTime::from_hms_opt(9, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Break,
),
];
let result = extract_timesheets(&shards).unwrap();
assert_eq!(result.len(), 1);
assert_eq!(result[0].special_day_type, Some(SpecialDayType::Undertime));
}
#[test]
fn test_day_with_sick_leave_and_timecards() {
let day = Utc.with_ymd_and_hms(2026, 2, 1, 0, 0, 0).unwrap();
let shards = vec![
point(
day.with_time(NaiveTime::from_hms_opt(7, 30, 0).unwrap())
.unwrap(),
TimesheetPointType::SickLeave,
),
point(
day.with_time(NaiveTime::from_hms_opt(9, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Card,
),
point(
day.with_time(NaiveTime::from_hms_opt(12, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Break,
),
];
let result = extract_timesheets(&shards).unwrap();
assert_eq!(result.len(), 1);
assert!(result[0].is_sick_leave);
assert_eq!(result[0].timecards.len(), 1);
}
#[test]
fn test_day_with_sick_leave_only() {
let day = Utc.with_ymd_and_hms(2026, 2, 1, 0, 0, 0).unwrap();
let shards = vec![
point(
day.with_time(NaiveTime::from_hms_opt(8, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::SickLeave,
),
point(
day.with_time(NaiveTime::from_hms_opt(9, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Break,
),
];
let result = extract_timesheets(&shards).unwrap();
assert_eq!(result.len(), 1);
assert!(result[0].is_sick_leave);
assert!(result[0].timecards.is_empty());
}
#[test]
fn test_empty_input() {
let result = extract_timesheets(&[]).unwrap();
assert!(result.is_empty());
}
#[test]
fn test_day_with_only_cards_and_no_break_is_invalid() {
let day = Utc.with_ymd_and_hms(2026, 2, 1, 0, 0, 0).unwrap();
let shards = vec![
point(
day.with_time(NaiveTime::from_hms_opt(9, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Card,
),
point(
day.with_time(NaiveTime::from_hms_opt(12, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Card,
),
];
let result = extract_timesheets(&shards);
assert!(result.is_err());
let err = result.unwrap_err();
assert!(err.to_string().contains("not a break"));
}
#[test]
fn test_two_special_day_types_same_day_is_invalid() {
let day = Utc.with_ymd_and_hms(2026, 2, 1, 0, 0, 0).unwrap();
let shards = vec![
point(
day.with_time(NaiveTime::from_hms_opt(8, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Vacation,
),
point(
day.with_time(NaiveTime::from_hms_opt(8, 5, 0).unwrap())
.unwrap(),
TimesheetPointType::Holiday,
),
point(
day.with_time(NaiveTime::from_hms_opt(9, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Break,
),
];
let result = extract_timesheets(&shards);
assert!(result.is_err());
let err = result.unwrap_err();
assert!(err.to_string().contains("is both"));
}
#[test]
fn test_day_with_only_breaks_is_ignored() {
let day = Utc.with_ymd_and_hms(2026, 2, 1, 0, 0, 0).unwrap();
let shards = vec![
point(
day.with_time(NaiveTime::from_hms_opt(12, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Break,
),
point(
day.with_time(NaiveTime::from_hms_opt(13, 0, 0).unwrap())
.unwrap(),
TimesheetPointType::Break,
),
];
let result = extract_timesheets(&shards).unwrap();
assert!(result.is_empty());
}
}

src/timesheet/mod.rs Normal file

@@ -0,0 +1,7 @@
mod configuration;
mod extract;
mod point_types;
pub use configuration::{BasicTimesheetConfiguration, TIMESHEET_DIMENSION_NAME, TIMESHEET_TAG};
pub use extract::extract_timesheets;
pub use point_types::TimesheetPointType;


@@ -0,0 +1,54 @@
use serde::{Deserialize, Serialize};
use std::str::FromStr;
/// Type of timesheet point for time tracking.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
pub enum TimesheetPointType {
#[serde(rename = "CARD")]
Card,
#[serde(rename = "SICK_LEAVE")]
SickLeave,
#[serde(rename = "VACATION")]
Vacation,
#[serde(rename = "UNDERTIME")]
Undertime,
#[serde(rename = "HOLIDAY")]
Holiday,
#[serde(rename = "BREAK")]
Break,
}
impl TimesheetPointType {
pub fn as_str(&self) -> &'static str {
match self {
TimesheetPointType::Card => "CARD",
TimesheetPointType::SickLeave => "SICK_LEAVE",
TimesheetPointType::Vacation => "VACATION",
TimesheetPointType::Undertime => "UNDERTIME",
TimesheetPointType::Holiday => "HOLIDAY",
TimesheetPointType::Break => "BREAK",
}
}
}
impl std::fmt::Display for TimesheetPointType {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.as_str())
}
}
impl FromStr for TimesheetPointType {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"CARD" => Ok(TimesheetPointType::Card),
"SICK_LEAVE" => Ok(TimesheetPointType::SickLeave),
"VACATION" => Ok(TimesheetPointType::Vacation),
"UNDERTIME" => Ok(TimesheetPointType::Undertime),
"HOLIDAY" => Ok(TimesheetPointType::Holiday),
"BREAK" => Ok(TimesheetPointType::Break),
_ => Err(format!("Unknown timesheet point type: {}", s)),
}
}
}


@@ -1,157 +0,0 @@
from datetime import date, datetime, time
from streamd.localize.extract_datetime import (
extract_date_from_marker,
extract_datetime_from_file_name,
extract_datetime_from_marker,
extract_datetime_from_marker_list,
extract_time_from_marker,
)
class TestExtractDateTime:
def test_extract_date_from_file_name_valid(self):
file_name = "20230101-123456 Some Text.md"
assert datetime(2023, 1, 1, 12, 34, 56) == extract_datetime_from_file_name(
file_name
)
def test_extract_date_from_file_name_invalid(self):
file_name = "invalid-file-name.md"
assert extract_datetime_from_file_name(file_name) is None
def test_extract_date_from_file_name_without_time(self):
file_name = "20230101 Some Text.md"
assert datetime(2023, 1, 1, 0, 0, 0) == extract_datetime_from_file_name(
file_name
)
def test_extract_date_from_file_name_short_time(self):
file_name = "20230101-1234 Some Text.md"
assert datetime(2023, 1, 1, 12, 34, 0) == extract_datetime_from_file_name(
file_name
)
def test_extract_date_from_file_name_empty_string(self):
file_name = ""
assert extract_datetime_from_file_name(file_name) is None
def test_extract_date_from_file_name_with_full_path(self):
file_name = "/path/to/20230101-123456 Some Text.md"
assert datetime(2023, 1, 1, 12, 34, 56) == extract_datetime_from_file_name(
file_name
)
class TestExtractMarkerDateTime:
def test_extract_datetime_from_marker_valid(self):
marker = "20250101150000"
assert datetime(2025, 1, 1, 15, 0, 0) == extract_datetime_from_marker(marker)
def test_extract_datetime_from_marker_invalid_format(self):
assert extract_datetime_from_marker("2025010115000") is None # too short
assert extract_datetime_from_marker("202501011500000") is None # too long
assert extract_datetime_from_marker("2025-01-01T150000") is None # separators
assert extract_datetime_from_marker("2025010115000a") is None # non-digit
assert extract_datetime_from_marker("") is None
def test_extract_datetime_from_marker_invalid_values(self):
assert extract_datetime_from_marker("20250230120000") is None # Feb 30
assert extract_datetime_from_marker("20250101126000") is None # minute 60
assert extract_datetime_from_marker("20250101240000") is None # hour 24
class TestExtractMarkerDate:
def test_extract_date_from_marker_valid(self):
marker = "20250101"
assert date(2025, 1, 1) == extract_date_from_marker(marker)
def test_extract_date_from_marker_invalid_format(self):
assert extract_date_from_marker("2025010") is None # too short
assert extract_date_from_marker("202501011") is None # too long
assert extract_date_from_marker("2025-01-01") is None # separators
assert extract_date_from_marker("2025010a") is None # non-digit
assert extract_date_from_marker("") is None
def test_extract_date_from_marker_invalid_values(self):
assert extract_date_from_marker("20250230") is None # Feb 30
assert extract_date_from_marker("20251301") is None # month 13
assert extract_date_from_marker("20250132") is None # day 32
class TestExtractMarkerTime:
def test_extract_time_from_marker_valid(self):
marker = "150000"
assert time(15, 0, 0) == extract_time_from_marker(marker)
def test_extract_time_from_marker_invalid_format(self):
assert extract_time_from_marker("15000") is None # too short
assert extract_time_from_marker("1500000") is None # too long
assert extract_time_from_marker("15:00:00") is None # separators
assert extract_time_from_marker("15000a") is None # non-digit
assert extract_time_from_marker("") is None
def test_extract_time_from_marker_invalid_values(self):
assert extract_time_from_marker("240000") is None # hour 24
assert extract_time_from_marker("156000") is None # minute 60
assert extract_time_from_marker("150060") is None # second 60
class TestExtractDateTimeFromMarkerList:
def test_no_markers_inherits_datetime(self):
inherited = datetime(2025, 1, 2, 3, 4, 5)
assert inherited == extract_datetime_from_marker_list([], inherited)
def test_unrelated_markers_inherits_datetime(self):
inherited = datetime(2025, 1, 2, 3, 4, 5)
markers = ["not-a-marker", "2025-01-01", "1500", "1234567"]
assert inherited == extract_datetime_from_marker_list(markers, inherited)
def test_date_only_marker_sets_midnight(self):
inherited = datetime(2025, 6, 7, 8, 9, 10)
markers = ["20250101"]
assert datetime(2025, 1, 1, 0, 0, 0) == extract_datetime_from_marker_list(
markers, inherited
)
def test_time_only_marker_inherits_date(self):
inherited = datetime(2025, 6, 7, 8, 9, 10)
markers = ["150000"]
assert datetime(2025, 6, 7, 15, 0, 0) == extract_datetime_from_marker_list(
markers, inherited
)
def test_datetime_marker_overrides_both_date_and_time(self):
inherited = datetime(2025, 6, 7, 8, 9, 10)
markers = ["20250101150000"]
assert datetime(2025, 1, 1, 15, 0, 0) == extract_datetime_from_marker_list(
markers, inherited
)
def test_combined_date_and_time_markers(self):
inherited = datetime(2025, 6, 7, 8, 9, 10)
markers = ["20250101", "150000"]
assert datetime(2025, 1, 1, 15, 0, 0) == extract_datetime_from_marker_list(
markers, inherited
)
def test_first_marker_wins_when_multiple_dates_or_times(self):
inherited = datetime(2025, 6, 7, 8, 9, 10)
markers = ["20250101", "150000", "20250102", "160000"]
assert datetime(2025, 1, 1, 15, 0, 0) == extract_datetime_from_marker_list(
markers, inherited
)
def test_last_separated_date_and_time_win(self):
inherited = datetime(2025, 6, 7, 8, 9, 10)
markers = ["20250101", "150000", "20250102160000"]
assert datetime(2025, 1, 1, 15, 0, 0) == extract_datetime_from_marker_list(
markers, inherited
)
def test_invalid_date_or_time_markers_are_ignored(self):
inherited = datetime(2025, 6, 7, 8, 9, 10)
markers = ["20251301", "240000", "20250101", "150000"]
assert datetime(2025, 1, 1, 15, 0, 0) == extract_datetime_from_marker_list(
markers, inherited
)


@@ -1,367 +0,0 @@
import pytest
from streamd.localize.repository_configuration import (
Dimension,
Marker,
MarkerPlacement,
RepositoryConfiguration,
merge_dimensions,
merge_markers,
merge_repository_configuration,
merge_single_dimension,
merge_single_marker,
)
class TestMergeSingleDimension:
def test_second_overrides_display_name_when_non_empty(self):
base = Dimension(display_name="Base", comment="c1", propagate=True)
second = Dimension(display_name="Second", comment="c2", propagate=False)
merged = merge_single_dimension(base, second)
assert merged.display_name == "Second"
assert merged.comment == "c2"
assert merged.propagate is False
def test_second_empty_display_name_falls_back_to_base(self):
base = Dimension(display_name="Base", comment="c1", propagate=True)
second = Dimension(display_name="", comment="c2", propagate=False)
merged = merge_single_dimension(base, second)
assert merged.display_name == "Base"
assert merged.comment == "c2"
assert merged.propagate is False
def test_second_comment_none_does_not_erase_base_comment(self):
base = Dimension(display_name="Base", comment="keep", propagate=True)
second = Dimension(display_name="Second", comment=None, propagate=False)
merged = merge_single_dimension(base, second)
assert merged.display_name == "Second"
assert merged.comment == "keep"
def test_second_comment_non_none_overrides_base_comment(self):
base = Dimension(display_name="Base", comment="c1", propagate=True)
second = Dimension(display_name="Second", comment="c2", propagate=True)
merged = merge_single_dimension(base, second)
assert merged.comment == "c2"
def test_second_propagate_overrides_base_when_provided(self):
base = Dimension(display_name="Base", comment="c1", propagate=True)
second = Dimension(display_name="Second", comment="c2", propagate=False)
merged = merge_single_dimension(base, second)
assert merged.propagate is False
def test_propagate_merging_retains_base_when_second_not_provided(self):
base = Dimension(display_name="Base", comment="c1", propagate=True)
second = Dimension(display_name="Second", comment="c2")
merged = merge_single_dimension(base, second)
assert merged.propagate is True
class TestMergeDimensions:
def test_adds_new_keys_from_second(self):
base = {"a": Dimension(display_name="A", propagate=True)}
second = {"b": Dimension(display_name="B", propagate=False)}
merged = merge_dimensions(base, second)
assert set(merged.keys()) == {"a", "b"}
assert merged["a"].display_name == "A"
assert merged["b"].display_name == "B"
def test_merges_existing_keys(self):
base = {"a": Dimension(display_name="A", comment="c1", propagate=True)}
second = {"a": Dimension(display_name="A2", comment=None, propagate=False)}
merged = merge_dimensions(base, second)
assert merged["a"].display_name == "A2"
assert merged["a"].comment == "c1"
assert merged["a"].propagate is False
def test_does_not_mutate_inputs(self):
base = {"a": Dimension(display_name="A", comment="c1", propagate=True)}
second = {"b": Dimension(display_name="B", comment="c2", propagate=False)}
merged = merge_dimensions(base, second)
assert "b" not in base
assert "a" not in second
assert set(merged.keys()) == {"a", "b"}
class TestMergeSingleMarker:
def test_second_overrides_display_name_when_non_empty(self):
base = Marker(
display_name="Base",
placements=[MarkerPlacement(dimension="project", value=None)],
)
second = Marker(
display_name="Second",
placements=[MarkerPlacement(dimension="timesheet", value="coding")],
)
merged = merge_single_marker(base, second)
assert merged.display_name == "Second"
assert merged.placements == [
MarkerPlacement(dimension="project", value=None, if_with=set()),
MarkerPlacement(dimension="timesheet", value="coding", if_with=set()),
]
def test_second_empty_display_name_falls_back_to_base(self):
base = Marker(display_name="Base", placements=[])
second = Marker(display_name="", placements=[])
merged = merge_single_marker(base, second)
assert merged.display_name == "Base"
def test_appends_new_placements(self):
base = Marker(
display_name="Base",
placements=[
MarkerPlacement(dimension="project"),
],
)
second = Marker(
display_name="Second",
placements=[
MarkerPlacement(
if_with={"Timesheet"}, dimension="timesheet", value="x"
),
],
)
merged = merge_single_marker(base, second)
assert merged.placements == [
MarkerPlacement(dimension="project"),
MarkerPlacement(if_with={"Timesheet"}, dimension="timesheet", value="x"),
]
def test_deduplicates_by_identity_and_second_overrides_base(self):
base = Marker(
display_name="Base",
placements=[
MarkerPlacement(if_with={"A"}, dimension="d", value="v"),
MarkerPlacement(if_with={"B"}, dimension="d", value="v2"),
],
)
second = Marker(
display_name="Second",
placements=[
MarkerPlacement(if_with={"A"}, dimension="d", value="v"),
MarkerPlacement(if_with={"C"}, dimension="d", value="v3"),
],
)
merged = merge_single_marker(base, second)
assert merged.placements == [
MarkerPlacement(if_with={"A"}, dimension="d", value="v"),
MarkerPlacement(if_with={"B"}, dimension="d", value="v2"),
MarkerPlacement(if_with={"C"}, dimension="d", value="v3"),
]
def test_identity_is_order_insensitive_for_if_with(self):
base = Marker(
display_name="Base",
placements=[MarkerPlacement(if_with={"A", "B"}, dimension="d", value="v")],
)
second = Marker(
display_name="Second",
placements=[MarkerPlacement(if_with={"B", "A"}, dimension="d", value="v2")],
)
merged = merge_single_marker(base, second)
# With `if_with` as a set, identity is order-insensitive; second overrides base.
assert merged.placements == [
MarkerPlacement(if_with={"A", "B"}, dimension="d", value="v2"),
]
class TestMergeMarkers:
def test_adds_new_marker_keys_from_second(self):
base = {"M1": Marker(display_name="M1", placements=[])}
second = {"M2": Marker(display_name="M2", placements=[])}
merged = merge_markers(base, second)
assert set(merged.keys()) == {"M1", "M2"}
def test_merges_existing_marker_keys(self):
base = {
"M": Marker(
display_name="Base",
placements=[MarkerPlacement(dimension="project")],
)
}
second = {
"M": Marker(
display_name="Second",
placements=[
MarkerPlacement(
if_with={"Timesheet"}, dimension="timesheet", value="coding"
)
],
)
}
merged = merge_markers(base, second)
assert merged["M"].display_name == "Second"
assert merged["M"].placements == [
MarkerPlacement(dimension="project", value=None, if_with=set()),
MarkerPlacement(
if_with={"Timesheet"}, dimension="timesheet", value="coding"
),
]
def test_does_not_mutate_inputs(self):
base = {"M1": Marker(display_name="M1", placements=[])}
second = {"M2": Marker(display_name="M2", placements=[])}
merged = merge_markers(base, second)
assert "M2" not in base
assert "M1" not in second
assert set(merged.keys()) == {"M1", "M2"}
class TestMergeRepositoryConfiguration:
def test_merges_dimensions_and_markers(self):
base = RepositoryConfiguration(
dimensions={
"project": Dimension(
display_name="Project", comment="c1", propagate=True
),
"moment": Dimension(
display_name="Moment", comment="c2", propagate=True
),
},
markers={
"Streamd": Marker(
display_name="Streamd",
placements=[MarkerPlacement(dimension="project")],
)
},
)
second = RepositoryConfiguration(
dimensions={
"project": Dimension(display_name="Project2", propagate=False),
"timesheet": Dimension(
display_name="Timesheet", comment="c3", propagate=False
),
},
markers={
"Streamd": Marker(
display_name="Streamd2",
placements=[
MarkerPlacement(
if_with={"Timesheet"}, dimension="timesheet", value="coding"
)
],
),
"JobHunting": Marker(
display_name="JobHunting",
placements=[MarkerPlacement(dimension="project")],
),
},
)
merged = merge_repository_configuration(base, second)
assert set(merged.dimensions.keys()) == {"project", "moment", "timesheet"}
assert merged.dimensions["project"].display_name == "Project2"
assert merged.dimensions["project"].comment == "c1"
assert merged.dimensions["project"].propagate is False
assert merged.dimensions["moment"].display_name == "Moment"
assert merged.dimensions["timesheet"].display_name == "Timesheet"
assert set(merged.markers.keys()) == {"Streamd", "JobHunting"}
assert merged.markers["Streamd"].display_name == "Streamd2"
assert merged.markers["Streamd"].placements == [
MarkerPlacement(dimension="project", value=None, if_with=set()),
MarkerPlacement(
if_with={"Timesheet"}, dimension="timesheet", value="coding"
),
]
assert merged.markers["JobHunting"].placements == [
MarkerPlacement(dimension="project", value=None, if_with=set())
]
def test_does_not_mutate_base_or_second(self):
base = RepositoryConfiguration(
dimensions={"a": Dimension(display_name="A", propagate=True)},
markers={"M": Marker(display_name="M", placements=[])},
)
second = RepositoryConfiguration(
dimensions={"b": Dimension(display_name="B", propagate=False)},
markers={"N": Marker(display_name="N", placements=[])},
)
_ = merge_repository_configuration(base, second)
assert set(base.dimensions.keys()) == {"a"}
assert set(second.dimensions.keys()) == {"b"}
assert set(base.markers.keys()) == {"M"}
assert set(second.markers.keys()) == {"N"}
def test_merge_is_associative_for_non_conflicting_inputs(self):
a = RepositoryConfiguration(
dimensions={"d1": Dimension(display_name="D1", propagate=True)},
markers={"m1": Marker(display_name="M1", placements=[])},
)
b = RepositoryConfiguration(
dimensions={"d2": Dimension(display_name="D2", propagate=False)},
markers={"m2": Marker(display_name="M2", placements=[])},
)
c = RepositoryConfiguration(
dimensions={"d3": Dimension(display_name="D3", propagate=False)},
markers={"m3": Marker(display_name="M3", placements=[])},
)
left = merge_repository_configuration(merge_repository_configuration(a, b), c)
right = merge_repository_configuration(a, merge_repository_configuration(b, c))
assert left == right
assert set(left.dimensions.keys()) == {"d1", "d2", "d3"}
assert set(left.markers.keys()) == {"m1", "m2", "m3"}
@pytest.mark.parametrize(
("base", "second", "expected_propagate"),
[
(
RepositoryConfiguration(
dimensions={"d": Dimension(display_name="D", propagate=True)},
markers={},
),
RepositoryConfiguration(
dimensions={"d": Dimension(display_name="D2")},
markers={},
),
True,
)
],
)
def test_merge_repository_configuration_propagate_preserves_base_when_omitted(
base: RepositoryConfiguration,
second: RepositoryConfiguration,
expected_propagate: bool,
):
merged = merge_repository_configuration(base, second)
assert merged.dimensions["d"].propagate is expected_propagate


@@ -1,343 +0,0 @@
from faker import Faker
from streamd.parse import Shard, StreamFile, parse_markdown_file
fake = Faker()
class TestParseProcess:
file_name: str = fake.file_name(extension="md")
def test_parse_empty_file(self):
assert parse_markdown_file(self.file_name, "") == StreamFile(
file_name=self.file_name, shard=Shard(start_line=1, end_line=1)
)
def test_parse_basic_one_line_file(self):
test_file = "Hello World"
assert parse_markdown_file(self.file_name, test_file) == StreamFile(
file_name=self.file_name,
shard=Shard(
start_line=1,
end_line=1,
),
)
def test_parse_basic_multi_line_file(self):
test_file = "Hello World\n\nHello again!"
assert parse_markdown_file(self.file_name, test_file) == StreamFile(
file_name=self.file_name,
shard=Shard(
start_line=1,
end_line=3,
),
)
def test_parse_single_line_with_tag(self):
test_file = "@Tag Hello World"
assert parse_markdown_file(self.file_name, test_file) == StreamFile(
file_name=self.file_name,
shard=Shard(
markers=["Tag"],
start_line=1,
end_line=1,
),
)
def test_parse_single_line_with_two_tags(self):
test_file = "@Marker1 @Marker2 Hello World"
assert parse_markdown_file(self.file_name, test_file) == StreamFile(
file_name=self.file_name,
shard=Shard(
markers=["Marker1", "Marker2"],
start_line=1,
end_line=1,
),
)
def test_parse_single_line_with_two_tags_and_misplaced_tag(self):
test_file = "@Tag1 @Tag2 Hello World @Tag3"
assert parse_markdown_file(self.file_name, test_file) == StreamFile(
file_name=self.file_name,
shard=Shard(
markers=["Tag1", "Tag2"],
tags=["Tag3"],
start_line=1,
end_line=1,
),
)
def test_parse_split_paragraphs_into_shards(self):
file_text = "Hello World!\n\n@Tag1 Block 1\n\n@Tag2 Block 2"
assert parse_markdown_file(self.file_name, file_text) == StreamFile(
file_name=self.file_name,
shard=Shard(
start_line=1,
end_line=5,
children=[
Shard(
markers=["Tag1"],
start_line=3,
end_line=3,
),
Shard(
markers=["Tag2"],
start_line=5,
end_line=5,
),
],
),
)
def test_parse_split_paragraph_with_inner_tags_at_more_positions(self):
file_text = "Hello @Tag1 World!\n\n@Marker Block 1\n\nBlock 2 @Tag2"
assert parse_markdown_file(self.file_name, file_text).shard == Shard(
tags=["Tag1", "Tag2"],
start_line=1,
end_line=5,
children=[
Shard(markers=["Marker"], start_line=3, end_line=3, children=[]),
],
)
def test_parse_header_without_markers(self):
file_text = "# Heading\n\n## Subheading"
assert parse_markdown_file(self.file_name, file_text).shard == Shard(
start_line=1,
end_line=3,
)
def test_parse_split_at_heading_if_marker_on_subheading(self):
file_text = "# Heading @Tag1\n\n## @Marker1 Subheading @Tag2\n\n# Heading @Tag3"
assert parse_markdown_file(self.file_name, file_text) == StreamFile(
file_name=self.file_name,
shard=Shard(
start_line=1,
end_line=5,
children=[
Shard(
tags=["Tag1"],
start_line=1,
end_line=4,
children=[
Shard(
markers=["Marker1"],
tags=["Tag2"],
start_line=3,
end_line=4,
),
],
),
Shard(tags=["Tag3"], start_line=5, end_line=5, children=[]),
],
),
)
def test_parse_only_parses_relevant_levels(self):
file_text = "# @Marker1 Heading @Tag1\n\n## Subheading @Tag2"
assert parse_markdown_file(self.file_name, file_text) == StreamFile(
file_name=self.file_name,
shard=Shard(
markers=["Marker1"],
tags=["Tag1", "Tag2"],
start_line=1,
end_line=3,
),
)
def test_parse_fully_before_headings_start(self):
file_text = "Hello\n\n@Marker1 World!\n\n# @Marker2 I'm a heading!"
assert parse_markdown_file(self.file_name, file_text).shard == Shard(
start_line=1,
end_line=5,
children=[
Shard(
start_line=1,
end_line=4,
children=[
Shard(
markers=["Marker1"],
start_line=3,
end_line=3,
)
],
),
Shard(markers=["Marker2"], start_line=5, end_line=5, children=[]),
],
)
def test_parse_complex_heading_structure(self):
file_text = "Preamble @Preamble\n## @Intro\n# @Title\n## @Chapter1\n## @Chapter2\n### Section 1\n### Section 2"
assert parse_markdown_file(self.file_name, file_text).shard == Shard(
start_line=1,
end_line=7,
children=[
Shard(
start_line=1,
end_line=2,
children=[
Shard(
tags=["Preamble"],
start_line=1,
end_line=1,
),
Shard(
markers=["Intro"],
start_line=2,
end_line=2,
),
],
),
Shard(
markers=["Title"],
start_line=3,
end_line=7,
children=[
Shard(
markers=["Chapter1"],
start_line=4,
end_line=4,
),
Shard(
markers=["Chapter2"],
start_line=5,
end_line=7,
),
],
),
],
)
def test_simple_list(self):
file_text = "* hello world\n * @Marker i've got a marker"
assert parse_markdown_file(self.file_name, file_text).shard == Shard(
markers=[],
tags=[],
start_line=1,
end_line=2,
children=[
Shard(
markers=["Marker"], tags=[], start_line=2, end_line=2, children=[]
)
],
)
def test_parse_complex_list(self):
file_text = """* I'm the parent!
* @Marker1 I've got a marker\n
* I've got no marker!
* I've got a child with a marker!
* @Marker2 I'm the child with the marker
"""
assert parse_markdown_file(self.file_name, file_text).shard == Shard(
markers=[],
tags=[],
start_line=1,
end_line=6,
children=[
Shard(
markers=[],
tags=[],
start_line=2,
end_line=6,
children=[
Shard(
markers=["Marker1"],
tags=[],
start_line=2,
end_line=3,
children=[],
),
Shard(
markers=[],
tags=[],
start_line=5,
end_line=6,
children=[
Shard(
markers=["Marker2"],
tags=[],
start_line=6,
end_line=6,
children=[],
)
],
),
],
)
],
)
def test_parse_ignores_tags_in_code(self):
file_text = "```\n@Marker\n```"
assert parse_markdown_file(self.file_name, file_text).shard == Shard(
markers=[],
tags=[],
start_line=1,
end_line=3,
children=[],
)
def test_parse_finds_tags_in_italic_text(self):
file_text = "*@ItalicMarker*"
assert parse_markdown_file(self.file_name, file_text).shard == Shard(
markers=["ItalicMarker"],
tags=[],
start_line=1,
end_line=1,
children=[],
)
def test_parse_finds_tags_in_bold_text(self):
file_text = "**@BoldMarker**"
assert parse_markdown_file(self.file_name, file_text).shard == Shard(
markers=["BoldMarker"],
tags=[],
start_line=1,
end_line=1,
children=[],
)
def test_parse_finds_tags_in_strikethrough_text(self):
file_text = "~~@StrikeMarker~~"
assert parse_markdown_file(self.file_name, file_text).shard == Shard(
markers=["StrikeMarker"],
tags=[],
start_line=1,
end_line=1,
children=[],
)
def test_parse_finds_tags_in_link(self):
file_text = "[@LinkMarker](https://konstantinfickel.de)"
assert parse_markdown_file(self.file_name, file_text).shard == Shard(
markers=["LinkMarker"],
tags=[],
start_line=1,
end_line=1,
children=[],
)
def test_parse_continues_looking_for_markers_after_first_link_marker(self):
file_text = "[@LinkMarker1](https://konstantinfickel.de1) [@LinkMarker2](https://konstantinfickel.de)"
assert parse_markdown_file(self.file_name, file_text).shard == Shard(
markers=["LinkMarker1", "LinkMarker2"],
tags=[],
start_line=1,
end_line=1,
children=[],
)


@@ -1,104 +0,0 @@
from __future__ import annotations
from datetime import datetime
from streamd.localize import LocalizedShard
from streamd.query.find import find_shard, find_shard_by_position
def generate_localized_shard(
*,
location: dict[str, str] | None = None,
children: list[LocalizedShard] | None = None,
) -> LocalizedShard:
return LocalizedShard(
start_line=1,
end_line=1,
moment=datetime(2020, 1, 1),
location=location or {},
children=children or [],
markers=[],
tags=[],
)
class TestFindShard:
def test_returns_empty_when_no_match(self) -> None:
root = generate_localized_shard(location={"file": "a.md"})
shards = [root]
result = find_shard(shards, lambda s: "missing" in s.location)
assert result == []
def test_finds_matches_depth_first_and_preserves_order(self) -> None:
grandchild = generate_localized_shard(location={"k": "match"})
child1 = generate_localized_shard(
location={"k": "match"}, children=[grandchild]
)
child2 = generate_localized_shard(location={"k": "nope"})
root = generate_localized_shard(
location={"k": "nope"}, children=[child1, child2]
)
result = find_shard([root], lambda s: s.location.get("k") == "match")
assert result == [child1, grandchild]
def test_includes_root_if_it_matches(self) -> None:
root = generate_localized_shard(
location={"k": "match"},
children=[generate_localized_shard(location={"k": "match"})],
)
result = find_shard([root], lambda s: s.location.get("k") == "match")
assert result[0] is root
assert len(result) == 2
def test_multiple_roots_keeps_left_to_right_order(self) -> None:
a = generate_localized_shard(location={"k": "match"})
b = generate_localized_shard(location={"k": "match"})
c = generate_localized_shard(location={"k": "nope"})
result = find_shard([a, b, c], lambda s: s.location.get("k") == "match")
assert result == [a, b]
def test_query_function_can_use_arbitrary_logic(self) -> None:
# Ensures typing/behavior supports any callable that returns bool.
a = generate_localized_shard(location={"x": "1"})
b = generate_localized_shard(location={"x": "2"})
c = generate_localized_shard(location={"x": "3"})
root = generate_localized_shard(location={}, children=[a, b, c])
def is_even_x(shard: LocalizedShard) -> bool:
x = shard.location.get("x")
return x is not None and int(x) % 2 == 0
result = find_shard([root], is_even_x)
assert result == [b]
class TestFindShardByPosition:
def test_matches_only_when_dimension_present_and_equal(self) -> None:
match = generate_localized_shard(location={"file": "a.md", "line": "10"})
wrong_value = generate_localized_shard(location={"file": "a.md", "line": "11"})
missing_dim = generate_localized_shard(location={"file": "a.md"})
root = generate_localized_shard(
location={"root": "x"}, children=[match, wrong_value, missing_dim]
)
result = find_shard_by_position([root], "line", "10")
assert result == [match]
def test_recurses_through_children(self) -> None:
deep = generate_localized_shard(location={"section": "s1"})
mid = generate_localized_shard(location={"section": "s0"}, children=[deep])
root = generate_localized_shard(location={}, children=[mid])
result = find_shard_by_position([root], "section", "s1")
assert result == [deep]
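The traversal these tests describe is a pre-order depth-first walk that preserves sibling order. A minimal sketch, using a stand-in `Shard` dataclass (the real `LocalizedShard` carries more fields):

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Shard:
    location: dict[str, str]
    children: list["Shard"] = field(default_factory=list)


def find_shard(shards: list[Shard], query: Callable[[Shard], bool]) -> list[Shard]:
    # Pre-order DFS: a matching parent is emitted before its matching
    # descendants, and left-to-right sibling order is preserved.
    result: list[Shard] = []
    for shard in shards:
        if query(shard):
            result.append(shard)
        result.extend(find_shard(shard.children, query))
    return result


def find_shard_by_position(
    shards: list[Shard], dimension: str, value: str
) -> list[Shard]:
    # `dict.get` returns None for a missing dimension, so shards that lack the
    # dimension entirely can never match.
    return find_shard(shards, lambda shard: shard.location.get(dimension) == value)
```

Expressing `find_shard_by_position` in terms of `find_shard` mirrors how the tests treat the query callable as the single source of matching logic.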


@@ -1,231 +0,0 @@
from datetime import datetime
from streamd.localize.localize import localize_stream_file
from streamd.localize.localized_shard import LocalizedShard
from streamd.localize.repository_configuration import (
Dimension,
Marker,
MarkerPlacement,
RepositoryConfiguration,
)
from streamd.parse.shard import Shard, StreamFile
repository_configuration = RepositoryConfiguration(
dimensions={
"project": Dimension(
display_name="Project",
comment="GTD Project that is being worked on",
propagate=True,
),
"moment": Dimension(
display_name="Moment",
comment="Timestamp this entry was created at",
propagate=True,
),
"timesheet": Dimension(
display_name="Timesheet",
comment="Time Cards for Time Tracking",
propagate=True,
),
},
markers={
"Streamd": Marker(
display_name="Streamd",
placements=[
MarkerPlacement(dimension="project"),
MarkerPlacement(
if_with={"Timesheet"}, dimension="timesheet", value="coding"
),
],
),
"JobHunting": Marker(
display_name="JobHunting", placements=[MarkerPlacement(dimension="project")]
),
},
)
class TestLocalize:
def test_project_simple_stream_file(self):
stream_file = StreamFile(
file_name="20250622-121000 Test File.md",
shard=Shard(start_line=1, end_line=1, markers=["Streamd"]),
)
assert localize_stream_file(
stream_file, repository_configuration
) == LocalizedShard(
moment=datetime(2025, 6, 22, 12, 10, 0, 0),
markers=["Streamd"],
tags=[],
start_line=1,
end_line=1,
children=[],
location={"project": "Streamd", "file": stream_file.file_name},
)
def test_timesheet_use_case(self):
stream_file = StreamFile(
file_name="20260131-210000 Test File.md",
shard=Shard(start_line=1, end_line=1, markers=["Timesheet", "Streamd"]),
)
assert localize_stream_file(
stream_file, repository_configuration
) == LocalizedShard(
moment=datetime(2026, 1, 31, 21, 0, 0, 0),
markers=["Timesheet", "Streamd"],
tags=[],
start_line=1,
end_line=1,
children=[],
location={
"file": stream_file.file_name,
"project": "Streamd",
"timesheet": "coding",
},
)
def test_overwrites_true_propagated_dimension_overwrites_existing_value(self):
config = RepositoryConfiguration(
dimensions={
"project": Dimension(display_name="Project", propagate=True),
},
markers={
"A": Marker(
display_name="A",
placements=[MarkerPlacement(dimension="project", value="a")],
),
"B": Marker(
display_name="B",
placements=[
MarkerPlacement(dimension="project", value="b", overwrites=True)
],
),
},
)
stream_file = StreamFile(
file_name="20260131-210000 Test File.md",
shard=Shard(start_line=1, end_line=1, markers=["A", "B"]),
)
assert localize_stream_file(stream_file, config) == LocalizedShard(
moment=datetime(2026, 1, 31, 21, 0, 0, 0),
markers=["A", "B"],
tags=[],
start_line=1,
end_line=1,
children=[],
location={"file": stream_file.file_name, "project": "b"},
)
def test_overwrites_false_propagated_dimension_does_not_overwrite_existing_value(
self,
):
config = RepositoryConfiguration(
dimensions={
"project": Dimension(display_name="Project", propagate=True),
},
markers={
"A": Marker(
display_name="A",
placements=[MarkerPlacement(dimension="project", value="a")],
),
"B": Marker(
display_name="B",
placements=[
MarkerPlacement(
dimension="project", value="b", overwrites=False
)
],
),
},
)
stream_file = StreamFile(
file_name="20260131-210000 Test File.md",
shard=Shard(start_line=1, end_line=1, markers=["A", "B"]),
)
assert localize_stream_file(stream_file, config) == LocalizedShard(
moment=datetime(2026, 1, 31, 21, 0, 0, 0),
markers=["A", "B"],
tags=[],
start_line=1,
end_line=1,
children=[],
location={"file": stream_file.file_name, "project": "a"},
)
def test_overwrites_true_non_propagated_dimension_overwrites_private_value(self):
config = RepositoryConfiguration(
dimensions={
"label": Dimension(display_name="Label", propagate=False),
},
markers={
"A": Marker(
display_name="A",
placements=[MarkerPlacement(dimension="label", value="a")],
),
"B": Marker(
display_name="B",
placements=[
MarkerPlacement(dimension="label", value="b", overwrites=True)
],
),
},
)
stream_file = StreamFile(
file_name="20260131-210000 Test File.md",
shard=Shard(start_line=1, end_line=1, markers=["A", "B"]),
)
assert localize_stream_file(stream_file, config) == LocalizedShard(
moment=datetime(2026, 1, 31, 21, 0, 0, 0),
markers=["A", "B"],
tags=[],
start_line=1,
end_line=1,
children=[],
location={"file": stream_file.file_name, "label": "b"},
)
def test_overwrites_false_non_propagated_dimension_does_not_overwrite_private_value(
self,
):
config = RepositoryConfiguration(
dimensions={
"label": Dimension(display_name="Label", propagate=False),
},
markers={
"A": Marker(
display_name="A",
placements=[
MarkerPlacement(dimension="label", value="a", overwrites=True)
],
),
"B": Marker(
display_name="B",
placements=[
MarkerPlacement(dimension="label", value="b", overwrites=False)
],
),
},
)
stream_file = StreamFile(
file_name="20260131-210000 Test File.md",
shard=Shard(start_line=1, end_line=1, markers=["A", "B"]),
)
assert localize_stream_file(stream_file, config) == LocalizedShard(
moment=datetime(2026, 1, 31, 21, 0, 0, 0),
markers=["A", "B"],
tags=[],
start_line=1,
end_line=1,
children=[],
location={"file": stream_file.file_name, "label": "a"},
)


@@ -1,288 +0,0 @@
from __future__ import annotations
from datetime import datetime, time
import pytest
from streamd.localize.localized_shard import LocalizedShard
from streamd.timesheet.configuration import (
TIMESHEET_DIMENSION_NAME,
TimesheetPointType,
)
from streamd.timesheet.extract import extract_timesheets
from streamd.timesheet.timecard import SpecialDayType, Timecard, Timesheet
def point(at: datetime, point_type: TimesheetPointType) -> LocalizedShard:
"""
Create a minimal LocalizedShard that will be interpreted as a timesheet point.
Note: The extract pipeline uses set-dimension filtering; we therefore ensure the
timesheet dimension is set in `location`.
"""
return LocalizedShard(
moment=at,
markers=["Timesheet"],
tags=[],
start_line=1,
end_line=1,
children=[],
location={TIMESHEET_DIMENSION_NAME: point_type.value, "file": "dummy.md"},
)
class TestExtractTimesheets:
def test_single_work_block(self):
day = datetime(2026, 2, 1, 0, 0, 0)
shards = [
point(day.replace(hour=9, minute=0), TimesheetPointType.Card),
point(day.replace(hour=17, minute=30), TimesheetPointType.Break),
]
assert extract_timesheets(shards) == [
Timesheet(
date=day.date(),
is_sick_leave=False,
special_day_type=None,
timecards=[Timecard(from_time=time(9, 0), to_time=time(17, 30))],
)
]
def test_three_work_blocks_separated_by_breaks(self):
day = datetime(2026, 2, 1, 0, 0, 0)
shards = [
point(day.replace(hour=7, minute=15), TimesheetPointType.Card),
point(day.replace(hour=12, minute=0), TimesheetPointType.Break),
point(day.replace(hour=12, minute=45), TimesheetPointType.Card),
point(day.replace(hour=15, minute=0), TimesheetPointType.Break),
point(day.replace(hour=16, minute=0), TimesheetPointType.Card),
point(day.replace(hour=17, minute=0), TimesheetPointType.Break),
]
assert extract_timesheets(shards) == [
Timesheet(
date=day.date(),
is_sick_leave=False,
special_day_type=None,
timecards=[
Timecard(from_time=time(7, 15), to_time=time(12, 0)),
Timecard(from_time=time(12, 45), to_time=time(15, 0)),
Timecard(from_time=time(16, 0), to_time=time(17, 0)),
],
)
]
def test_input_order_is_not_required_within_a_day(self):
"""
Points may come unsorted; extraction should sort by timestamp within a day.
"""
day = datetime(2026, 2, 1, 0, 0, 0)
shards = [
point(day.replace(hour=15, minute=0), TimesheetPointType.Break),
point(day.replace(hour=7, minute=15), TimesheetPointType.Card),
point(day.replace(hour=12, minute=0), TimesheetPointType.Break),
point(day.replace(hour=12, minute=45), TimesheetPointType.Card),
point(day.replace(hour=17, minute=0), TimesheetPointType.Break),
point(day.replace(hour=16, minute=0), TimesheetPointType.Card),
]
assert extract_timesheets(shards) == [
Timesheet(
date=day.date(),
is_sick_leave=False,
special_day_type=None,
timecards=[
Timecard(from_time=time(7, 15), to_time=time(12, 0)),
Timecard(from_time=time(12, 45), to_time=time(15, 0)),
Timecard(from_time=time(16, 0), to_time=time(17, 0)),
],
)
]
def test_groups_by_day(self):
"""
If points span multiple days, we should get one Timesheet per day.
"""
day1 = datetime(2026, 2, 1, 0, 0, 0)
day2 = datetime(2026, 2, 2, 0, 0, 0)
# Note: the current implementation groups by date using `itertools.groupby` on
# the incoming order, so day1 points must come before day2 points. This
# asserts the intended behavior.
shards = [
point(day1.replace(hour=9, minute=0), TimesheetPointType.Card),
point(day1.replace(hour=17, minute=0), TimesheetPointType.Break),
point(day2.replace(hour=10, minute=0), TimesheetPointType.Card),
point(day2.replace(hour=18, minute=0), TimesheetPointType.Break),
]
assert extract_timesheets(shards) == [
Timesheet(
date=day1.date(),
is_sick_leave=False,
special_day_type=None,
timecards=[Timecard(from_time=time(9, 0), to_time=time(17, 0))],
),
Timesheet(
date=day2.date(),
is_sick_leave=False,
special_day_type=None,
timecards=[Timecard(from_time=time(10, 0), to_time=time(18, 0))],
),
]
def test_day_with_only_special_day_type_vacation(self):
"""
A day can be marked as Vacation without timecards; it should still produce a Timesheet.
"""
day = datetime(2026, 2, 1, 0, 0, 0)
shards = [
point(day.replace(hour=8, minute=0), TimesheetPointType.Vacation),
point(day.replace(hour=9, minute=0), TimesheetPointType.Break),
]
assert extract_timesheets(shards) == [
Timesheet(
date=day.date(),
is_sick_leave=False,
special_day_type=SpecialDayType.Vacation,
timecards=[],
)
]
def test_day_with_only_special_day_type_holiday(self):
day = datetime(2026, 2, 1, 0, 0, 0)
shards = [
point(day.replace(hour=8, minute=0), TimesheetPointType.Holiday),
point(day.replace(hour=9, minute=0), TimesheetPointType.Break),
]
assert extract_timesheets(shards) == [
Timesheet(
date=day.date(),
is_sick_leave=False,
special_day_type=SpecialDayType.Holiday,
timecards=[],
)
]
def test_day_with_only_special_day_type_undertime(self):
"""
Undertime behaves like the other special day types: a Timesheet without timecards.
"""
day = datetime(2026, 2, 1, 0, 0, 0)
shards = [
point(day.replace(hour=8, minute=0), TimesheetPointType.Undertime),
point(day.replace(hour=9, minute=0), TimesheetPointType.Break),
]
assert extract_timesheets(shards) == [
Timesheet(
date=day.date(),
is_sick_leave=False,
special_day_type=SpecialDayType.Undertime,
timecards=[],
)
]
def test_day_with_sick_leave_and_timecards(self):
"""
SickLeave should set the flag but not prevent timecard aggregation.
"""
day = datetime(2026, 2, 1, 0, 0, 0)
shards = [
point(day.replace(hour=7, minute=30), TimesheetPointType.SickLeave),
point(day.replace(hour=9, minute=0), TimesheetPointType.Card),
point(day.replace(hour=12, minute=0), TimesheetPointType.Break),
]
assert extract_timesheets(shards) == [
Timesheet(
date=day.date(),
is_sick_leave=True,
special_day_type=None,
timecards=[Timecard(from_time=time(9, 0), to_time=time(12, 0))],
)
]
def test_day_with_sick_leave_only(self):
"""
A day with only SickLeave should still produce a Timesheet (no timecards).
"""
day = datetime(2026, 2, 1, 0, 0, 0)
shards = [
point(day.replace(hour=8, minute=0), TimesheetPointType.SickLeave),
point(day.replace(hour=9, minute=0), TimesheetPointType.Break),
]
assert extract_timesheets(shards) == [
Timesheet(
date=day.date(),
is_sick_leave=True,
special_day_type=None,
timecards=[],
)
]
def test_empty_input(self):
assert extract_timesheets([]) == []
def test_day_with_only_cards_and_no_break_is_invalid(self):
"""
A day ending 'in work' (last point not a Break) should raise.
"""
day = datetime(2026, 2, 1, 0, 0, 0)
shards = [
point(day.replace(hour=9, minute=0), TimesheetPointType.Card),
point(day.replace(hour=12, minute=0), TimesheetPointType.Card),
]
with pytest.raises(ValueError, match=r"Last Timecard of .* is not a break"):
_ = extract_timesheets(shards)
def test_two_special_day_types_same_day_is_invalid(self):
"""
A day cannot be both Vacation and Holiday (or any two distinct special types).
"""
day = datetime(2026, 2, 1, 0, 0, 0)
shards = [
point(day.replace(hour=8, minute=0), TimesheetPointType.Vacation),
point(day.replace(hour=8, minute=5), TimesheetPointType.Holiday),
point(day.replace(hour=9, minute=0), TimesheetPointType.Break),
]
with pytest.raises(ValueError, match=r"is both .* and .*"):
_ = extract_timesheets(shards)
def test_points_with_mixed_dates_inside_one_group_raises(self):
"""
Defensive: if aggregation receives points spanning multiple dates for a single day,
it should raise. (This can occur if higher-level grouping is incorrect.)
"""
day1 = datetime(2026, 2, 1, 0, 0, 0)
day2 = datetime(2026, 2, 2, 0, 0, 0)
shards = [
point(day1.replace(hour=9, minute=0), TimesheetPointType.Card),
point(day2.replace(hour=9, minute=30), TimesheetPointType.Break),
]
with pytest.raises(ValueError, match=r"Last Timecard of .* is not a break"):
_ = extract_timesheets(shards)
def test_day_with_only_breaks_is_ignored(self):
"""
A day with no timecards and no sick/special markers should not emit a Timesheet.
"""
day = datetime(2026, 2, 1, 0, 0, 0)
shards = [
point(day.replace(hour=12, minute=0), TimesheetPointType.Break),
point(day.replace(hour=13, minute=0), TimesheetPointType.Break),
]
assert extract_timesheets(shards) == []

438
uv.lock generated
View file

@ -1,438 +0,0 @@
version = 1
revision = 3
requires-python = ">=3.13"
[[package]]
name = "annotated-doc"
version = "0.0.4"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/57/ba/046ceea27344560984e26a590f90bc7f4a75b06701f653222458922b558c/annotated_doc-0.0.4.tar.gz", hash = "sha256:fbcda96e87e9c92ad167c2e53839e57503ecfda18804ea28102353485033faa4", size = 7288, upload-time = "2025-11-10T22:07:42.062Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/1e/d3/26bf1008eb3d2daa8ef4cacc7f3bfdc11818d111f7e2d0201bc6e3b49d45/annotated_doc-0.0.4-py3-none-any.whl", hash = "sha256:571ac1dc6991c450b25a9c2d84a3705e2ae7a53467b5d111c24fa8baabbed320", size = 5303, upload-time = "2025-11-10T22:07:40.673Z" },
]
[[package]]
name = "annotated-types"
version = "0.7.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/ee/67/531ea369ba64dcff5ec9c3402f9f51bf748cec26dde048a2f973a4eea7f5/annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89", size = 16081, upload-time = "2024-05-20T21:33:25.928Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/78/b6/6307fbef88d9b5ee7421e68d78a9f162e0da4900bc5f5793f6d3d0e34fb8/annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53", size = 13643, upload-time = "2024-05-20T21:33:24.1Z" },
]
[[package]]
name = "basedpyright"
version = "1.38.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "nodejs-wheel-binaries" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a2/d4/4ac6eeba6cfe2ad8586dcf87fdb9e8b045aa467b559bc2e24e91e84f58b2/basedpyright-1.38.0.tar.gz", hash = "sha256:7a9cf631d7eaf5859022a4352b51ed0e78ce115435a8599402239804000d0cdf", size = 25257385, upload-time = "2026-02-11T16:05:47.834Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d5/90/1883cec16d667d944b08e8d8909b9b2f46cc1d2b9731e855e3c71f9b0450/basedpyright-1.38.0-py3-none-any.whl", hash = "sha256:a6c11a343fd12a2152a0d721b0e92f54f2e2e3322ee2562197e27dad952f1a61", size = 12303557, upload-time = "2026-02-11T16:05:44.863Z" },
]
[[package]]
name = "click"
version = "8.3.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/3d/fa/656b739db8587d7b5dfa22e22ed02566950fbfbcdc20311993483657a5c0/click-8.3.1.tar.gz", hash = "sha256:12ff4785d337a1bb490bb7e9c2b1ee5da3112e94a8622f26a6c77f5d2fc6842a", size = 295065, upload-time = "2025-11-15T20:45:42.706Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/98/78/01c019cdb5d6498122777c1a43056ebb3ebfeef2076d9d026bfe15583b2b/click-8.3.1-py3-none-any.whl", hash = "sha256:981153a64e25f12d547d3426c367a4857371575ee7ad18df2a6183ab0545b2a6", size = 108274, upload-time = "2025-11-15T20:45:41.139Z" },
]
[[package]]
name = "colorama"
version = "0.4.6"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697, upload-time = "2022-10-25T02:36:22.414Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335, upload-time = "2022-10-25T02:36:20.889Z" },
]
[[package]]
name = "faker"
version = "40.4.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "tzdata", marker = "sys_platform == 'win32'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/fc/7e/dccb7013c9f3d66f2e379383600629fec75e4da2698548bdbf2041ea4b51/faker-40.4.0.tar.gz", hash = "sha256:76f8e74a3df28c3e2ec2caafa956e19e37a132fdc7ea067bc41783affcfee364", size = 1952221, upload-time = "2026-02-06T23:30:15.515Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ac/63/58efa67c10fb27810d34351b7a10f85f109a7f7e2a07dc3773952459c47b/faker-40.4.0-py3-none-any.whl", hash = "sha256:486d43c67ebbb136bc932406418744f9a0bdf2c07f77703ea78b58b77e9aa443", size = 1987060, upload-time = "2026-02-06T23:30:13.44Z" },
]
[[package]]
name = "iniconfig"
version = "2.3.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/72/34/14ca021ce8e5dfedc35312d08ba8bf51fdd999c576889fc2c24cb97f4f10/iniconfig-2.3.0.tar.gz", hash = "sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730", size = 20503, upload-time = "2025-10-18T21:55:43.219Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12", size = 7484, upload-time = "2025-10-18T21:55:41.639Z" },
]
[[package]]
name = "markdown-it-py"
version = "4.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "mdurl" },
]
sdist = { url = "https://files.pythonhosted.org/packages/5b/f5/4ec618ed16cc4f8fb3b701563655a69816155e79e24a17b651541804721d/markdown_it_py-4.0.0.tar.gz", hash = "sha256:cb0a2b4aa34f932c007117b194e945bd74e0ec24133ceb5bac59009cda1cb9f3", size = 73070, upload-time = "2025-08-11T12:57:52.854Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/94/54/e7d793b573f298e1c9013b8c4dade17d481164aa517d1d7148619c2cedbf/markdown_it_py-4.0.0-py3-none-any.whl", hash = "sha256:87327c59b172c5011896038353a81343b6754500a08cd7a4973bb48c6d578147", size = 87321, upload-time = "2025-08-11T12:57:51.923Z" },
]
[[package]]
name = "mdurl"
version = "0.1.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/d6/54/cfe61301667036ec958cb99bd3efefba235e65cdeb9c84d24a8293ba1d90/mdurl-0.1.2.tar.gz", hash = "sha256:bb413d29f5eea38f31dd4754dd7377d4465116fb207585f97bf925588687c1ba", size = 8729, upload-time = "2022-08-14T12:40:10.846Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b3/38/89ba8ad64ae25be8de66a6d463314cf1eb366222074cfda9ee839c56a4b4/mdurl-0.1.2-py3-none-any.whl", hash = "sha256:84008a41e51615a49fc9966191ff91509e3c40b939176e643fd50a5c2196b8f8", size = 9979, upload-time = "2022-08-14T12:40:09.779Z" },
]
[[package]]
name = "mistletoe"
version = "1.5.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/31/ae/d33647e2a26a8899224f36afc5e7b7a670af30f1fd87231e9f07ca19d673/mistletoe-1.5.1.tar.gz", hash = "sha256:c5571ce6ca9cfdc7ce9151c3ae79acb418e067812000907616427197648030a3", size = 111769, upload-time = "2025-12-07T16:19:01.066Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/20/60/0980fefdc4d12c18c1bbab9d62852f27aded8839233c7b0a9827aaf395f5/mistletoe-1.5.1-py3-none-any.whl", hash = "sha256:d3e97664798261503f685f6a6281b092628367cf3128fc68a015a993b0c4feb3", size = 55331, upload-time = "2025-12-07T16:18:59.65Z" },
]
[[package]]
name = "nodejs-wheel-binaries"
version = "24.13.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/e5/d0/81d98b8fddc45332f79d6ad5749b1c7409fb18723545eae75d9b7e0048fb/nodejs_wheel_binaries-24.13.1.tar.gz", hash = "sha256:512659a67449a038231e2e972d49e77049d2cf789ae27db39eff4ab1ca52ac57", size = 8056, upload-time = "2026-02-12T17:31:04.368Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/aa/04/1ffe1838306654fcb50bcf46172567d50c8e27a76f4b9e55a1971fab5c4f/nodejs_wheel_binaries-24.13.1-py2.py3-none-macosx_13_0_arm64.whl", hash = "sha256:360ac9382c651de294c23c4933a02358c4e11331294983f3cf50ca1ac32666b1", size = 54757440, upload-time = "2026-02-12T17:30:35.748Z" },
{ url = "https://files.pythonhosted.org/packages/66/f6/81ad81bc3bd919a20b110130c4fd318c7b6a5abb37eb53daa353ad908012/nodejs_wheel_binaries-24.13.1-py2.py3-none-macosx_13_0_x86_64.whl", hash = "sha256:035b718946793986762cdd50deee7f5f1a8f1b0bad0f0cfd57cad5492f5ea018", size = 54932957, upload-time = "2026-02-12T17:30:40.114Z" },
{ url = "https://files.pythonhosted.org/packages/14/be/8e8a2bd50953c4c5b7e0fca07368d287917b84054dc3c93dd26a2940f0f9/nodejs_wheel_binaries-24.13.1-py2.py3-none-manylinux_2_28_aarch64.whl", hash = "sha256:f795e9238438c4225f76fbd01e2b8e1a322116bbd0dc15a7dbd585a3ad97961e", size = 59287257, upload-time = "2026-02-12T17:30:43.781Z" },
{ url = "https://files.pythonhosted.org/packages/58/57/92f6dfa40647702a9fa6d32393ce4595d0fc03c1daa9b245df66cc60e959/nodejs_wheel_binaries-24.13.1-py2.py3-none-manylinux_2_28_x86_64.whl", hash = "sha256:978328e3ad522571eb163b042dfbd7518187a13968fe372738f90fdfe8a46afc", size = 59781783, upload-time = "2026-02-12T17:30:47.387Z" },
{ url = "https://files.pythonhosted.org/packages/f7/a5/457b984cf675cf86ace7903204b9c36edf7a2d1b4325ddf71eaf8d1027c7/nodejs_wheel_binaries-24.13.1-py2.py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:e1dc893df85299420cd2a5feea0c3f8482a719b5f7f82d5977d58718b8b78b5f", size = 61287166, upload-time = "2026-02-12T17:30:50.646Z" },
{ url = "https://files.pythonhosted.org/packages/3c/99/da515f7bc3bce35cfa6005f0e0c4e3c4042a466782b143112eb393b663be/nodejs_wheel_binaries-24.13.1-py2.py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:0e581ae219a39073dcadd398a2eb648f0707b0f5d68c565586139f919c91cbe9", size = 61870142, upload-time = "2026-02-12T17:30:54.563Z" },
{ url = "https://files.pythonhosted.org/packages/cc/c0/22001d2c96d8200834af7d1de5e72daa3266c7270330275104c3d9ddd143/nodejs_wheel_binaries-24.13.1-py2.py3-none-win_amd64.whl", hash = "sha256:d4c969ea0bcb8c8b20bc6a7b4ad2796146d820278f17d4dc20229b088c833e22", size = 41185473, upload-time = "2026-02-12T17:30:57.524Z" },
{ url = "https://files.pythonhosted.org/packages/ab/c4/7532325f968ecfc078e8a028e69a52e4c3f95fb800906bf6931ac1e89e2b/nodejs_wheel_binaries-24.13.1-py2.py3-none-win_arm64.whl", hash = "sha256:caec398cb9e94c560bacdcba56b3828df22a355749eb291f47431af88cbf26dc", size = 38881194, upload-time = "2026-02-12T17:31:00.214Z" },
]
[[package]]
name = "packaging"
version = "26.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/65/ee/299d360cdc32edc7d2cf530f3accf79c4fca01e96ffc950d8a52213bd8e4/packaging-26.0.tar.gz", hash = "sha256:00243ae351a257117b6a241061796684b084ed1c516a08c48a3f7e147a9d80b4", size = 143416, upload-time = "2026-01-21T20:50:39.064Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b7/b9/c538f279a4e237a006a2c98387d081e9eb060d203d8ed34467cc0f0b9b53/packaging-26.0-py3-none-any.whl", hash = "sha256:b36f1fef9334a5588b4166f8bcd26a14e521f2b55e6b9de3aaa80d3ff7a37529", size = 74366, upload-time = "2026-01-21T20:50:37.788Z" },
]
[[package]]
name = "pluggy"
version = "1.6.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f9/e2/3e91f31a7d2b083fe6ef3fa267035b518369d9511ffab804f839851d2779/pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3", size = 69412, upload-time = "2025-05-15T12:30:07.975Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" },
]
[[package]]
name = "pydantic"
version = "2.12.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "annotated-types" },
{ name = "pydantic-core" },
{ name = "typing-extensions" },
{ name = "typing-inspection" },
]
sdist = { url = "https://files.pythonhosted.org/packages/69/44/36f1a6e523abc58ae5f928898e4aca2e0ea509b5aa6f6f392a5d882be928/pydantic-2.12.5.tar.gz", hash = "sha256:4d351024c75c0f085a9febbb665ce8c0c6ec5d30e903bdb6394b7ede26aebb49", size = 821591, upload-time = "2025-11-26T15:11:46.471Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/5a/87/b70ad306ebb6f9b585f114d0ac2137d792b48be34d732d60e597c2f8465a/pydantic-2.12.5-py3-none-any.whl", hash = "sha256:e561593fccf61e8a20fc46dfc2dfe075b8be7d0188df33f221ad1f0139180f9d", size = 463580, upload-time = "2025-11-26T15:11:44.605Z" },
]
[[package]]
name = "pydantic-core"
version = "2.41.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/71/70/23b021c950c2addd24ec408e9ab05d59b035b39d97cdc1130e1bce647bb6/pydantic_core-2.41.5.tar.gz", hash = "sha256:08daa51ea16ad373ffd5e7606252cc32f07bc72b28284b6bc9c6df804816476e", size = 460952, upload-time = "2025-11-04T13:43:49.098Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/87/06/8806241ff1f70d9939f9af039c6c35f2360cf16e93c2ca76f184e76b1564/pydantic_core-2.41.5-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:941103c9be18ac8daf7b7adca8228f8ed6bb7a1849020f643b3a14d15b1924d9", size = 2120403, upload-time = "2025-11-04T13:40:25.248Z" },
{ url = "https://files.pythonhosted.org/packages/94/02/abfa0e0bda67faa65fef1c84971c7e45928e108fe24333c81f3bfe35d5f5/pydantic_core-2.41.5-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:112e305c3314f40c93998e567879e887a3160bb8689ef3d2c04b6cc62c33ac34", size = 1896206, upload-time = "2025-11-04T13:40:27.099Z" },
{ url = "https://files.pythonhosted.org/packages/15/df/a4c740c0943e93e6500f9eb23f4ca7ec9bf71b19e608ae5b579678c8d02f/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0cbaad15cb0c90aa221d43c00e77bb33c93e8d36e0bf74760cd00e732d10a6a0", size = 1919307, upload-time = "2025-11-04T13:40:29.806Z" },
{ url = "https://files.pythonhosted.org/packages/9a/e3/6324802931ae1d123528988e0e86587c2072ac2e5394b4bc2bc34b61ff6e/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:03ca43e12fab6023fc79d28ca6b39b05f794ad08ec2feccc59a339b02f2b3d33", size = 2063258, upload-time = "2025-11-04T13:40:33.544Z" },
{ url = "https://files.pythonhosted.org/packages/c9/d4/2230d7151d4957dd79c3044ea26346c148c98fbf0ee6ebd41056f2d62ab5/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:dc799088c08fa04e43144b164feb0c13f9a0bc40503f8df3e9fde58a3c0c101e", size = 2214917, upload-time = "2025-11-04T13:40:35.479Z" },
{ url = "https://files.pythonhosted.org/packages/e6/9f/eaac5df17a3672fef0081b6c1bb0b82b33ee89aa5cec0d7b05f52fd4a1fa/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:97aeba56665b4c3235a0e52b2c2f5ae9cd071b8a8310ad27bddb3f7fb30e9aa2", size = 2332186, upload-time = "2025-11-04T13:40:37.436Z" },
{ url = "https://files.pythonhosted.org/packages/cf/4e/35a80cae583a37cf15604b44240e45c05e04e86f9cfd766623149297e971/pydantic_core-2.41.5-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:406bf18d345822d6c21366031003612b9c77b3e29ffdb0f612367352aab7d586", size = 2073164, upload-time = "2025-11-04T13:40:40.289Z" },
{ url = "https://files.pythonhosted.org/packages/bf/e3/f6e262673c6140dd3305d144d032f7bd5f7497d3871c1428521f19f9efa2/pydantic_core-2.41.5-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b93590ae81f7010dbe380cdeab6f515902ebcbefe0b9327cc4804d74e93ae69d", size = 2179146, upload-time = "2025-11-04T13:40:42.809Z" },
{ url = "https://files.pythonhosted.org/packages/75/c7/20bd7fc05f0c6ea2056a4565c6f36f8968c0924f19b7d97bbfea55780e73/pydantic_core-2.41.5-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:01a3d0ab748ee531f4ea6c3e48ad9dac84ddba4b0d82291f87248f2f9de8d740", size = 2137788, upload-time = "2025-11-04T13:40:44.752Z" },
{ url = "https://files.pythonhosted.org/packages/3a/8d/34318ef985c45196e004bc46c6eab2eda437e744c124ef0dbe1ff2c9d06b/pydantic_core-2.41.5-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:6561e94ba9dacc9c61bce40e2d6bdc3bfaa0259d3ff36ace3b1e6901936d2e3e", size = 2340133, upload-time = "2025-11-04T13:40:46.66Z" },
{ url = "https://files.pythonhosted.org/packages/9c/59/013626bf8c78a5a5d9350d12e7697d3d4de951a75565496abd40ccd46bee/pydantic_core-2.41.5-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:915c3d10f81bec3a74fbd4faebe8391013ba61e5a1a8d48c4455b923bdda7858", size = 2324852, upload-time = "2025-11-04T13:40:48.575Z" },
{ url = "https://files.pythonhosted.org/packages/1a/d9/c248c103856f807ef70c18a4f986693a46a8ffe1602e5d361485da502d20/pydantic_core-2.41.5-cp313-cp313-win32.whl", hash = "sha256:650ae77860b45cfa6e2cdafc42618ceafab3a2d9a3811fcfbd3bbf8ac3c40d36", size = 1994679, upload-time = "2025-11-04T13:40:50.619Z" },
{ url = "https://files.pythonhosted.org/packages/9e/8b/341991b158ddab181cff136acd2552c9f35bd30380422a639c0671e99a91/pydantic_core-2.41.5-cp313-cp313-win_amd64.whl", hash = "sha256:79ec52ec461e99e13791ec6508c722742ad745571f234ea6255bed38c6480f11", size = 2019766, upload-time = "2025-11-04T13:40:52.631Z" },
{ url = "https://files.pythonhosted.org/packages/73/7d/f2f9db34af103bea3e09735bb40b021788a5e834c81eedb541991badf8f5/pydantic_core-2.41.5-cp313-cp313-win_arm64.whl", hash = "sha256:3f84d5c1b4ab906093bdc1ff10484838aca54ef08de4afa9de0f5f14d69639cd", size = 1981005, upload-time = "2025-11-04T13:40:54.734Z" },
{ url = "https://files.pythonhosted.org/packages/ea/28/46b7c5c9635ae96ea0fbb779e271a38129df2550f763937659ee6c5dbc65/pydantic_core-2.41.5-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:3f37a19d7ebcdd20b96485056ba9e8b304e27d9904d233d7b1015db320e51f0a", size = 2119622, upload-time = "2025-11-04T13:40:56.68Z" },
{ url = "https://files.pythonhosted.org/packages/74/1a/145646e5687e8d9a1e8d09acb278c8535ebe9e972e1f162ed338a622f193/pydantic_core-2.41.5-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:1d1d9764366c73f996edd17abb6d9d7649a7eb690006ab6adbda117717099b14", size = 1891725, upload-time = "2025-11-04T13:40:58.807Z" },
{ url = "https://files.pythonhosted.org/packages/23/04/e89c29e267b8060b40dca97bfc64a19b2a3cf99018167ea1677d96368273/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:25e1c2af0fce638d5f1988b686f3b3ea8cd7de5f244ca147c777769e798a9cd1", size = 1915040, upload-time = "2025-11-04T13:41:00.853Z" },
{ url = "https://files.pythonhosted.org/packages/84/a3/15a82ac7bd97992a82257f777b3583d3e84bdb06ba6858f745daa2ec8a85/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:506d766a8727beef16b7adaeb8ee6217c64fc813646b424d0804d67c16eddb66", size = 2063691, upload-time = "2025-11-04T13:41:03.504Z" },
{ url = "https://files.pythonhosted.org/packages/74/9b/0046701313c6ef08c0c1cf0e028c67c770a4e1275ca73131563c5f2a310a/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4819fa52133c9aa3c387b3328f25c1facc356491e6135b459f1de698ff64d869", size = 2213897, upload-time = "2025-11-04T13:41:05.804Z" },
{ url = "https://files.pythonhosted.org/packages/8a/cd/6bac76ecd1b27e75a95ca3a9a559c643b3afcd2dd62086d4b7a32a18b169/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2b761d210c9ea91feda40d25b4efe82a1707da2ef62901466a42492c028553a2", size = 2333302, upload-time = "2025-11-04T13:41:07.809Z" },
{ url = "https://files.pythonhosted.org/packages/4c/d2/ef2074dc020dd6e109611a8be4449b98cd25e1b9b8a303c2f0fca2f2bcf7/pydantic_core-2.41.5-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:22f0fb8c1c583a3b6f24df2470833b40207e907b90c928cc8d3594b76f874375", size = 2064877, upload-time = "2025-11-04T13:41:09.827Z" },
{ url = "https://files.pythonhosted.org/packages/18/66/e9db17a9a763d72f03de903883c057b2592c09509ccfe468187f2a2eef29/pydantic_core-2.41.5-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2782c870e99878c634505236d81e5443092fba820f0373997ff75f90f68cd553", size = 2180680, upload-time = "2025-11-04T13:41:12.379Z" },
{ url = "https://files.pythonhosted.org/packages/d3/9e/3ce66cebb929f3ced22be85d4c2399b8e85b622db77dad36b73c5387f8f8/pydantic_core-2.41.5-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:0177272f88ab8312479336e1d777f6b124537d47f2123f89cb37e0accea97f90", size = 2138960, upload-time = "2025-11-04T13:41:14.627Z" },
{ url = "https://files.pythonhosted.org/packages/a6/62/205a998f4327d2079326b01abee48e502ea739d174f0a89295c481a2272e/pydantic_core-2.41.5-cp314-cp314-musllinux_1_1_armv7l.whl", hash = "sha256:63510af5e38f8955b8ee5687740d6ebf7c2a0886d15a6d65c32814613681bc07", size = 2339102, upload-time = "2025-11-04T13:41:16.868Z" },
{ url = "https://files.pythonhosted.org/packages/3c/0d/f05e79471e889d74d3d88f5bd20d0ed189ad94c2423d81ff8d0000aab4ff/pydantic_core-2.41.5-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:e56ba91f47764cc14f1daacd723e3e82d1a89d783f0f5afe9c364b8bb491ccdb", size = 2326039, upload-time = "2025-11-04T13:41:18.934Z" },
{ url = "https://files.pythonhosted.org/packages/ec/e1/e08a6208bb100da7e0c4b288eed624a703f4d129bde2da475721a80cab32/pydantic_core-2.41.5-cp314-cp314-win32.whl", hash = "sha256:aec5cf2fd867b4ff45b9959f8b20ea3993fc93e63c7363fe6851424c8a7e7c23", size = 1995126, upload-time = "2025-11-04T13:41:21.418Z" },
{ url = "https://files.pythonhosted.org/packages/48/5d/56ba7b24e9557f99c9237e29f5c09913c81eeb2f3217e40e922353668092/pydantic_core-2.41.5-cp314-cp314-win_amd64.whl", hash = "sha256:8e7c86f27c585ef37c35e56a96363ab8de4e549a95512445b85c96d3e2f7c1bf", size = 2015489, upload-time = "2025-11-04T13:41:24.076Z" },
{ url = "https://files.pythonhosted.org/packages/4e/bb/f7a190991ec9e3e0ba22e4993d8755bbc4a32925c0b5b42775c03e8148f9/pydantic_core-2.41.5-cp314-cp314-win_arm64.whl", hash = "sha256:e672ba74fbc2dc8eea59fb6d4aed6845e6905fc2a8afe93175d94a83ba2a01a0", size = 1977288, upload-time = "2025-11-04T13:41:26.33Z" },
{ url = "https://files.pythonhosted.org/packages/92/ed/77542d0c51538e32e15afe7899d79efce4b81eee631d99850edc2f5e9349/pydantic_core-2.41.5-cp314-cp314t-macosx_10_12_x86_64.whl", hash = "sha256:8566def80554c3faa0e65ac30ab0932b9e3a5cd7f8323764303d468e5c37595a", size = 2120255, upload-time = "2025-11-04T13:41:28.569Z" },
{ url = "https://files.pythonhosted.org/packages/bb/3d/6913dde84d5be21e284439676168b28d8bbba5600d838b9dca99de0fad71/pydantic_core-2.41.5-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:b80aa5095cd3109962a298ce14110ae16b8c1aece8b72f9dafe81cf597ad80b3", size = 1863760, upload-time = "2025-11-04T13:41:31.055Z" },
{ url = "https://files.pythonhosted.org/packages/5a/f0/e5e6b99d4191da102f2b0eb9687aaa7f5bea5d9964071a84effc3e40f997/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3006c3dd9ba34b0c094c544c6006cc79e87d8612999f1a5d43b769b89181f23c", size = 1878092, upload-time = "2025-11-04T13:41:33.21Z" },
{ url = "https://files.pythonhosted.org/packages/71/48/36fb760642d568925953bcc8116455513d6e34c4beaa37544118c36aba6d/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:72f6c8b11857a856bcfa48c86f5368439f74453563f951e473514579d44aa612", size = 2053385, upload-time = "2025-11-04T13:41:35.508Z" },
{ url = "https://files.pythonhosted.org/packages/20/25/92dc684dd8eb75a234bc1c764b4210cf2646479d54b47bf46061657292a8/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5cb1b2f9742240e4bb26b652a5aeb840aa4b417c7748b6f8387927bc6e45e40d", size = 2218832, upload-time = "2025-11-04T13:41:37.732Z" },
{ url = "https://files.pythonhosted.org/packages/e2/09/f53e0b05023d3e30357d82eb35835d0f6340ca344720a4599cd663dca599/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bd3d54f38609ff308209bd43acea66061494157703364ae40c951f83ba99a1a9", size = 2327585, upload-time = "2025-11-04T13:41:40Z" },
{ url = "https://files.pythonhosted.org/packages/aa/4e/2ae1aa85d6af35a39b236b1b1641de73f5a6ac4d5a7509f77b814885760c/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2ff4321e56e879ee8d2a879501c8e469414d948f4aba74a2d4593184eb326660", size = 2041078, upload-time = "2025-11-04T13:41:42.323Z" },
{ url = "https://files.pythonhosted.org/packages/cd/13/2e215f17f0ef326fc72afe94776edb77525142c693767fc347ed6288728d/pydantic_core-2.41.5-cp314-cp314t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d0d2568a8c11bf8225044aa94409e21da0cb09dcdafe9ecd10250b2baad531a9", size = 2173914, upload-time = "2025-11-04T13:41:45.221Z" },
{ url = "https://files.pythonhosted.org/packages/02/7a/f999a6dcbcd0e5660bc348a3991c8915ce6599f4f2c6ac22f01d7a10816c/pydantic_core-2.41.5-cp314-cp314t-musllinux_1_1_aarch64.whl", hash = "sha256:a39455728aabd58ceabb03c90e12f71fd30fa69615760a075b9fec596456ccc3", size = 2129560, upload-time = "2025-11-04T13:41:47.474Z" },
{ url = "https://files.pythonhosted.org/packages/3a/b1/6c990ac65e3b4c079a4fb9f5b05f5b013afa0f4ed6780a3dd236d2cbdc64/pydantic_core-2.41.5-cp314-cp314t-musllinux_1_1_armv7l.whl", hash = "sha256:239edca560d05757817c13dc17c50766136d21f7cd0fac50295499ae24f90fdf", size = 2329244, upload-time = "2025-11-04T13:41:49.992Z" },
{ url = "https://files.pythonhosted.org/packages/d9/02/3c562f3a51afd4d88fff8dffb1771b30cfdfd79befd9883ee094f5b6c0d8/pydantic_core-2.41.5-cp314-cp314t-musllinux_1_1_x86_64.whl", hash = "sha256:2a5e06546e19f24c6a96a129142a75cee553cc018ffee48a460059b1185f4470", size = 2331955, upload-time = "2025-11-04T13:41:54.079Z" },
{ url = "https://files.pythonhosted.org/packages/5c/96/5fb7d8c3c17bc8c62fdb031c47d77a1af698f1d7a406b0f79aaa1338f9ad/pydantic_core-2.41.5-cp314-cp314t-win32.whl", hash = "sha256:b4ececa40ac28afa90871c2cc2b9ffd2ff0bf749380fbdf57d165fd23da353aa", size = 1988906, upload-time = "2025-11-04T13:41:56.606Z" },
{ url = "https://files.pythonhosted.org/packages/22/ed/182129d83032702912c2e2d8bbe33c036f342cc735737064668585dac28f/pydantic_core-2.41.5-cp314-cp314t-win_amd64.whl", hash = "sha256:80aa89cad80b32a912a65332f64a4450ed00966111b6615ca6816153d3585a8c", size = 1981607, upload-time = "2025-11-04T13:41:58.889Z" },
{ url = "https://files.pythonhosted.org/packages/9f/ed/068e41660b832bb0b1aa5b58011dea2a3fe0ba7861ff38c4d4904c1c1a99/pydantic_core-2.41.5-cp314-cp314t-win_arm64.whl", hash = "sha256:35b44f37a3199f771c3eaa53051bc8a70cd7b54f333531c59e29fd4db5d15008", size = 1974769, upload-time = "2025-11-04T13:42:01.186Z" },
]
[[package]]
name = "pydantic-settings"
version = "2.12.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pydantic" },
{ name = "python-dotenv" },
{ name = "typing-inspection" },
]
sdist = { url = "https://files.pythonhosted.org/packages/43/4b/ac7e0aae12027748076d72a8764ff1c9d82ca75a7a52622e67ed3f765c54/pydantic_settings-2.12.0.tar.gz", hash = "sha256:005538ef951e3c2a68e1c08b292b5f2e71490def8589d4221b95dab00dafcfd0", size = 194184, upload-time = "2025-11-10T14:25:47.013Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c1/60/5d4751ba3f4a40a6891f24eec885f51afd78d208498268c734e256fb13c4/pydantic_settings-2.12.0-py3-none-any.whl", hash = "sha256:fddb9fd99a5b18da837b29710391e945b1e30c135477f484084ee513adb93809", size = 51880, upload-time = "2025-11-10T14:25:45.546Z" },
]
[package.optional-dependencies]
yaml = [
{ name = "pyyaml" },
]
[[package]]
name = "pygments"
version = "2.19.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b0/77/a5b8c569bf593b0140bde72ea885a803b82086995367bf2037de0159d924/pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887", size = 4968631, upload-time = "2025-06-21T13:39:12.283Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b", size = 1225217, upload-time = "2025-06-21T13:39:07.939Z" },
]
[[package]]
name = "pytest"
version = "9.0.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "iniconfig" },
{ name = "packaging" },
{ name = "pluggy" },
{ name = "pygments" },
]
sdist = { url = "https://files.pythonhosted.org/packages/d1/db/7ef3487e0fb0049ddb5ce41d3a49c235bf9ad299b6a25d5780a89f19230f/pytest-9.0.2.tar.gz", hash = "sha256:75186651a92bd89611d1d9fc20f0b4345fd827c41ccd5c299a868a05d70edf11", size = 1568901, upload-time = "2025-12-06T21:30:51.014Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/3b/ab/b3226f0bd7cdcf710fbede2b3548584366da3b19b5021e74f5bde2a8fa3f/pytest-9.0.2-py3-none-any.whl", hash = "sha256:711ffd45bf766d5264d487b917733b453d917afd2b0ad65223959f59089f875b", size = 374801, upload-time = "2025-12-06T21:30:49.154Z" },
]
[[package]]
name = "python-dotenv"
version = "1.2.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f0/26/19cadc79a718c5edbec86fd4919a6b6d3f681039a2f6d66d14be94e75fb9/python_dotenv-1.2.1.tar.gz", hash = "sha256:42667e897e16ab0d66954af0e60a9caa94f0fd4ecf3aaf6d2d260eec1aa36ad6", size = 44221, upload-time = "2025-10-26T15:12:10.434Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/14/1b/a298b06749107c305e1fe0f814c6c74aea7b2f1e10989cb30f544a1b3253/python_dotenv-1.2.1-py3-none-any.whl", hash = "sha256:b81ee9561e9ca4004139c6cbba3a238c32b03e4894671e181b671e8cb8425d61", size = 21230, upload-time = "2025-10-26T15:12:09.109Z" },
]
[[package]]
name = "pyyaml"
version = "6.0.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/05/8e/961c0007c59b8dd7729d542c61a4d537767a59645b82a0b521206e1e25c2/pyyaml-6.0.3.tar.gz", hash = "sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f", size = 130960, upload-time = "2025-09-25T21:33:16.546Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d1/11/0fd08f8192109f7169db964b5707a2f1e8b745d4e239b784a5a1dd80d1db/pyyaml-6.0.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:8da9669d359f02c0b91ccc01cac4a67f16afec0dac22c2ad09f46bee0697eba8", size = 181669, upload-time = "2025-09-25T21:32:23.673Z" },
{ url = "https://files.pythonhosted.org/packages/b1/16/95309993f1d3748cd644e02e38b75d50cbc0d9561d21f390a76242ce073f/pyyaml-6.0.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:2283a07e2c21a2aa78d9c4442724ec1eb15f5e42a723b99cb3d822d48f5f7ad1", size = 173252, upload-time = "2025-09-25T21:32:25.149Z" },
{ url = "https://files.pythonhosted.org/packages/50/31/b20f376d3f810b9b2371e72ef5adb33879b25edb7a6d072cb7ca0c486398/pyyaml-6.0.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee2922902c45ae8ccada2c5b501ab86c36525b883eff4255313a253a3160861c", size = 767081, upload-time = "2025-09-25T21:32:26.575Z" },
{ url = "https://files.pythonhosted.org/packages/49/1e/a55ca81e949270d5d4432fbbd19dfea5321eda7c41a849d443dc92fd1ff7/pyyaml-6.0.3-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a33284e20b78bd4a18c8c2282d549d10bc8408a2a7ff57653c0cf0b9be0afce5", size = 841159, upload-time = "2025-09-25T21:32:27.727Z" },
{ url = "https://files.pythonhosted.org/packages/74/27/e5b8f34d02d9995b80abcef563ea1f8b56d20134d8f4e5e81733b1feceb2/pyyaml-6.0.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0f29edc409a6392443abf94b9cf89ce99889a1dd5376d94316ae5145dfedd5d6", size = 801626, upload-time = "2025-09-25T21:32:28.878Z" },
{ url = "https://files.pythonhosted.org/packages/f9/11/ba845c23988798f40e52ba45f34849aa8a1f2d4af4b798588010792ebad6/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f7057c9a337546edc7973c0d3ba84ddcdf0daa14533c2065749c9075001090e6", size = 753613, upload-time = "2025-09-25T21:32:30.178Z" },
{ url = "https://files.pythonhosted.org/packages/3d/e0/7966e1a7bfc0a45bf0a7fb6b98ea03fc9b8d84fa7f2229e9659680b69ee3/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:eda16858a3cab07b80edaf74336ece1f986ba330fdb8ee0d6c0d68fe82bc96be", size = 794115, upload-time = "2025-09-25T21:32:31.353Z" },
{ url = "https://files.pythonhosted.org/packages/de/94/980b50a6531b3019e45ddeada0626d45fa85cbe22300844a7983285bed3b/pyyaml-6.0.3-cp313-cp313-win32.whl", hash = "sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26", size = 137427, upload-time = "2025-09-25T21:32:32.58Z" },
{ url = "https://files.pythonhosted.org/packages/97/c9/39d5b874e8b28845e4ec2202b5da735d0199dbe5b8fb85f91398814a9a46/pyyaml-6.0.3-cp313-cp313-win_amd64.whl", hash = "sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c", size = 154090, upload-time = "2025-09-25T21:32:33.659Z" },
{ url = "https://files.pythonhosted.org/packages/73/e8/2bdf3ca2090f68bb3d75b44da7bbc71843b19c9f2b9cb9b0f4ab7a5a4329/pyyaml-6.0.3-cp313-cp313-win_arm64.whl", hash = "sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb", size = 140246, upload-time = "2025-09-25T21:32:34.663Z" },
{ url = "https://files.pythonhosted.org/packages/9d/8c/f4bd7f6465179953d3ac9bc44ac1a8a3e6122cf8ada906b4f96c60172d43/pyyaml-6.0.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac", size = 181814, upload-time = "2025-09-25T21:32:35.712Z" },
{ url = "https://files.pythonhosted.org/packages/bd/9c/4d95bb87eb2063d20db7b60faa3840c1b18025517ae857371c4dd55a6b3a/pyyaml-6.0.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310", size = 173809, upload-time = "2025-09-25T21:32:36.789Z" },
{ url = "https://files.pythonhosted.org/packages/92/b5/47e807c2623074914e29dabd16cbbdd4bf5e9b2db9f8090fa64411fc5382/pyyaml-6.0.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7", size = 766454, upload-time = "2025-09-25T21:32:37.966Z" },
{ url = "https://files.pythonhosted.org/packages/02/9e/e5e9b168be58564121efb3de6859c452fccde0ab093d8438905899a3a483/pyyaml-6.0.3-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788", size = 836355, upload-time = "2025-09-25T21:32:39.178Z" },
{ url = "https://files.pythonhosted.org/packages/88/f9/16491d7ed2a919954993e48aa941b200f38040928474c9e85ea9e64222c3/pyyaml-6.0.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5", size = 794175, upload-time = "2025-09-25T21:32:40.865Z" },
{ url = "https://files.pythonhosted.org/packages/dd/3f/5989debef34dc6397317802b527dbbafb2b4760878a53d4166579111411e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764", size = 755228, upload-time = "2025-09-25T21:32:42.084Z" },
{ url = "https://files.pythonhosted.org/packages/d7/ce/af88a49043cd2e265be63d083fc75b27b6ed062f5f9fd6cdc223ad62f03e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35", size = 789194, upload-time = "2025-09-25T21:32:43.362Z" },
{ url = "https://files.pythonhosted.org/packages/23/20/bb6982b26a40bb43951265ba29d4c246ef0ff59c9fdcdf0ed04e0687de4d/pyyaml-6.0.3-cp314-cp314-win_amd64.whl", hash = "sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac", size = 156429, upload-time = "2025-09-25T21:32:57.844Z" },
{ url = "https://files.pythonhosted.org/packages/f4/f4/a4541072bb9422c8a883ab55255f918fa378ecf083f5b85e87fc2b4eda1b/pyyaml-6.0.3-cp314-cp314-win_arm64.whl", hash = "sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3", size = 143912, upload-time = "2025-09-25T21:32:59.247Z" },
{ url = "https://files.pythonhosted.org/packages/7c/f9/07dd09ae774e4616edf6cda684ee78f97777bdd15847253637a6f052a62f/pyyaml-6.0.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3", size = 189108, upload-time = "2025-09-25T21:32:44.377Z" },
{ url = "https://files.pythonhosted.org/packages/4e/78/8d08c9fb7ce09ad8c38ad533c1191cf27f7ae1effe5bb9400a46d9437fcf/pyyaml-6.0.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba", size = 183641, upload-time = "2025-09-25T21:32:45.407Z" },
{ url = "https://files.pythonhosted.org/packages/7b/5b/3babb19104a46945cf816d047db2788bcaf8c94527a805610b0289a01c6b/pyyaml-6.0.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c", size = 831901, upload-time = "2025-09-25T21:32:48.83Z" },
{ url = "https://files.pythonhosted.org/packages/8b/cc/dff0684d8dc44da4d22a13f35f073d558c268780ce3c6ba1b87055bb0b87/pyyaml-6.0.3-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702", size = 861132, upload-time = "2025-09-25T21:32:50.149Z" },
{ url = "https://files.pythonhosted.org/packages/b1/5e/f77dc6b9036943e285ba76b49e118d9ea929885becb0a29ba8a7c75e29fe/pyyaml-6.0.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c", size = 839261, upload-time = "2025-09-25T21:32:51.808Z" },
{ url = "https://files.pythonhosted.org/packages/ce/88/a9db1376aa2a228197c58b37302f284b5617f56a5d959fd1763fb1675ce6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065", size = 805272, upload-time = "2025-09-25T21:32:52.941Z" },
{ url = "https://files.pythonhosted.org/packages/da/92/1446574745d74df0c92e6aa4a7b0b3130706a4142b2d1a5869f2eaa423c6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65", size = 829923, upload-time = "2025-09-25T21:32:54.537Z" },
{ url = "https://files.pythonhosted.org/packages/f0/7a/1c7270340330e575b92f397352af856a8c06f230aa3e76f86b39d01b416a/pyyaml-6.0.3-cp314-cp314t-win_amd64.whl", hash = "sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9", size = 174062, upload-time = "2025-09-25T21:32:55.767Z" },
{ url = "https://files.pythonhosted.org/packages/f1/12/de94a39c2ef588c7e6455cfbe7343d3b2dc9d6b6b2f40c4c6565744c873d/pyyaml-6.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b", size = 149341, upload-time = "2025-09-25T21:32:56.828Z" },
]
[[package]]
name = "rich"
version = "14.3.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "markdown-it-py" },
{ name = "pygments" },
]
sdist = { url = "https://files.pythonhosted.org/packages/74/99/a4cab2acbb884f80e558b0771e97e21e939c5dfb460f488d19df485e8298/rich-14.3.2.tar.gz", hash = "sha256:e712f11c1a562a11843306f5ed999475f09ac31ffb64281f73ab29ffdda8b3b8", size = 230143, upload-time = "2026-02-01T16:20:47.908Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ef/45/615f5babd880b4bd7d405cc0dc348234c5ffb6ed1ea33e152ede08b2072d/rich-14.3.2-py3-none-any.whl", hash = "sha256:08e67c3e90884651da3239ea668222d19bea7b589149d8014a21c633420dbb69", size = 309963, upload-time = "2026-02-01T16:20:46.078Z" },
]
[[package]]
name = "ruff"
version = "0.15.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/04/dc/4e6ac71b511b141cf626357a3946679abeba4cf67bc7cc5a17920f31e10d/ruff-0.15.1.tar.gz", hash = "sha256:c590fe13fb57c97141ae975c03a1aedb3d3156030cabd740d6ff0b0d601e203f", size = 4540855, upload-time = "2026-02-12T23:09:09.998Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/23/bf/e6e4324238c17f9d9120a9d60aa99a7daaa21204c07fcd84e2ef03bb5fd1/ruff-0.15.1-py3-none-linux_armv6l.whl", hash = "sha256:b101ed7cf4615bda6ffe65bdb59f964e9f4a0d3f85cbf0e54f0ab76d7b90228a", size = 10367819, upload-time = "2026-02-12T23:09:03.598Z" },
{ url = "https://files.pythonhosted.org/packages/b3/ea/c8f89d32e7912269d38c58f3649e453ac32c528f93bb7f4219258be2e7ed/ruff-0.15.1-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:939c995e9277e63ea632cc8d3fae17aa758526f49a9a850d2e7e758bfef46602", size = 10798618, upload-time = "2026-02-12T23:09:22.928Z" },
{ url = "https://files.pythonhosted.org/packages/5e/0f/1d0d88bc862624247d82c20c10d4c0f6bb2f346559d8af281674cf327f15/ruff-0.15.1-py3-none-macosx_11_0_arm64.whl", hash = "sha256:1d83466455fdefe60b8d9c8df81d3c1bbb2115cede53549d3b522ce2bc703899", size = 10148518, upload-time = "2026-02-12T23:08:58.339Z" },
{ url = "https://files.pythonhosted.org/packages/f5/c8/291c49cefaa4a9248e986256df2ade7add79388fe179e0691be06fae6f37/ruff-0.15.1-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a9457e3c3291024866222b96108ab2d8265b477e5b1534c7ddb1810904858d16", size = 10518811, upload-time = "2026-02-12T23:09:31.865Z" },
{ url = "https://files.pythonhosted.org/packages/c3/1a/f5707440e5ae43ffa5365cac8bbb91e9665f4a883f560893829cf16a606b/ruff-0.15.1-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:92c92b003e9d4f7fbd33b1867bb15a1b785b1735069108dfc23821ba045b29bc", size = 10196169, upload-time = "2026-02-12T23:09:17.306Z" },
{ url = "https://files.pythonhosted.org/packages/2a/ff/26ddc8c4da04c8fd3ee65a89c9fb99eaa5c30394269d424461467be2271f/ruff-0.15.1-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1fe5c41ab43e3a06778844c586251eb5a510f67125427625f9eb2b9526535779", size = 10990491, upload-time = "2026-02-12T23:09:25.503Z" },
{ url = "https://files.pythonhosted.org/packages/fc/00/50920cb385b89413f7cdb4bb9bc8fc59c1b0f30028d8bccc294189a54955/ruff-0.15.1-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:66a6dd6df4d80dc382c6484f8ce1bcceb55c32e9f27a8b94c32f6c7331bf14fb", size = 11843280, upload-time = "2026-02-12T23:09:19.88Z" },
{ url = "https://files.pythonhosted.org/packages/5d/6d/2f5cad8380caf5632a15460c323ae326f1e1a2b5b90a6ee7519017a017ca/ruff-0.15.1-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6a4a42cbb8af0bda9bcd7606b064d7c0bc311a88d141d02f78920be6acb5aa83", size = 11274336, upload-time = "2026-02-12T23:09:14.907Z" },
{ url = "https://files.pythonhosted.org/packages/a3/1d/5f56cae1d6c40b8a318513599b35ea4b075d7dc1cd1d04449578c29d1d75/ruff-0.15.1-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4ab064052c31dddada35079901592dfba2e05f5b1e43af3954aafcbc1096a5b2", size = 11137288, upload-time = "2026-02-12T23:09:07.475Z" },
{ url = "https://files.pythonhosted.org/packages/cd/20/6f8d7d8f768c93b0382b33b9306b3b999918816da46537d5a61635514635/ruff-0.15.1-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:5631c940fe9fe91f817a4c2ea4e81f47bee3ca4aa646134a24374f3c19ad9454", size = 11070681, upload-time = "2026-02-12T23:08:55.43Z" },
{ url = "https://files.pythonhosted.org/packages/9a/67/d640ac76069f64cdea59dba02af2e00b1fa30e2103c7f8d049c0cff4cafd/ruff-0.15.1-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:68138a4ba184b4691ccdc39f7795c66b3c68160c586519e7e8444cf5a53e1b4c", size = 10486401, upload-time = "2026-02-12T23:09:27.927Z" },
{ url = "https://files.pythonhosted.org/packages/65/3d/e1429f64a3ff89297497916b88c32a5cc88eeca7e9c787072d0e7f1d3e1e/ruff-0.15.1-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:518f9af03bfc33c03bdb4cb63fabc935341bb7f54af500f92ac309ecfbba6330", size = 10197452, upload-time = "2026-02-12T23:09:12.147Z" },
{ url = "https://files.pythonhosted.org/packages/78/83/e2c3bade17dad63bf1e1c2ffaf11490603b760be149e1419b07049b36ef2/ruff-0.15.1-py3-none-musllinux_1_2_i686.whl", hash = "sha256:da79f4d6a826caaea95de0237a67e33b81e6ec2e25fc7e1993a4015dffca7c61", size = 10693900, upload-time = "2026-02-12T23:09:34.418Z" },
{ url = "https://files.pythonhosted.org/packages/a1/27/fdc0e11a813e6338e0706e8b39bb7a1d61ea5b36873b351acee7e524a72a/ruff-0.15.1-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:3dd86dccb83cd7d4dcfac303ffc277e6048600dfc22e38158afa208e8bf94a1f", size = 11227302, upload-time = "2026-02-12T23:09:36.536Z" },
{ url = "https://files.pythonhosted.org/packages/f6/58/ac864a75067dcbd3b95be5ab4eb2b601d7fbc3d3d736a27e391a4f92a5c1/ruff-0.15.1-py3-none-win32.whl", hash = "sha256:660975d9cb49b5d5278b12b03bb9951d554543a90b74ed5d366b20e2c57c2098", size = 10462555, upload-time = "2026-02-12T23:09:29.899Z" },
{ url = "https://files.pythonhosted.org/packages/e0/5e/d4ccc8a27ecdb78116feac4935dfc39d1304536f4296168f91ed3ec00cd2/ruff-0.15.1-py3-none-win_amd64.whl", hash = "sha256:c820fef9dd5d4172a6570e5721704a96c6679b80cf7be41659ed439653f62336", size = 11599956, upload-time = "2026-02-12T23:09:01.157Z" },
{ url = "https://files.pythonhosted.org/packages/2a/07/5bda6a85b220c64c65686bc85bd0bbb23b29c62b3a9f9433fa55f17cda93/ruff-0.15.1-py3-none-win_arm64.whl", hash = "sha256:5ff7d5f0f88567850f45081fac8f4ec212be8d0b963e385c3f7d0d2eb4899416", size = 10874604, upload-time = "2026-02-12T23:09:05.515Z" },
]
[[package]]
name = "shellingham"
version = "1.5.4"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/58/15/8b3609fd3830ef7b27b655beb4b4e9c62313a4e8da8c676e142cc210d58e/shellingham-1.5.4.tar.gz", hash = "sha256:8dbca0739d487e5bd35ab3ca4b36e11c4078f3a234bfce294b0a0291363404de", size = 10310, upload-time = "2023-10-24T04:13:40.426Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e0/f9/0595336914c5619e5f28a1fb793285925a8cd4b432c9da0a987836c7f822/shellingham-1.5.4-py2.py3-none-any.whl", hash = "sha256:7ecfff8f2fd72616f7481040475a65b2bf8af90a56c89140852d1120324e8686", size = 9755, upload-time = "2023-10-24T04:13:38.866Z" },
]
[[package]]
name = "streamd"
version = "0.1.0"
source = { editable = "." }
dependencies = [
{ name = "click" },
{ name = "mistletoe" },
{ name = "pydantic" },
{ name = "pydantic-settings", extra = ["yaml"] },
{ name = "rich" },
{ name = "typer" },
{ name = "xdg-base-dirs" },
]
[package.dev-dependencies]
dev = [
{ name = "basedpyright" },
{ name = "faker" },
{ name = "pytest" },
{ name = "ruff" },
]
[package.metadata]
requires-dist = [
{ name = "click", specifier = "==8.3.1" },
{ name = "mistletoe", specifier = "==1.5.1" },
{ name = "pydantic", specifier = "==2.12.5" },
{ name = "pydantic-settings", extras = ["yaml"], specifier = "==2.12.0" },
{ name = "rich", specifier = "==14.3.2" },
{ name = "typer", specifier = "==0.23.1" },
{ name = "xdg-base-dirs", specifier = "==6.0.2" },
]
[package.metadata.requires-dev]
dev = [
{ name = "basedpyright", specifier = "==1.38.0" },
{ name = "faker", specifier = "==40.4.0" },
{ name = "pytest", specifier = "==9.0.2" },
{ name = "ruff", specifier = "==0.15.1" },
]
[[package]]
name = "typer"
version = "0.23.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "annotated-doc" },
{ name = "click" },
{ name = "rich" },
{ name = "shellingham" },
]
sdist = { url = "https://files.pythonhosted.org/packages/fd/07/b822e1b307d40e263e8253d2384cf98c51aa2368cc7ba9a07e523a1d964b/typer-0.23.1.tar.gz", hash = "sha256:2070374e4d31c83e7b61362fd859aa683576432fd5b026b060ad6b4cd3b86134", size = 120047, upload-time = "2026-02-13T10:04:30.984Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d5/91/9b286ab899c008c2cb05e8be99814807e7fbbd33f0c0c960470826e5ac82/typer-0.23.1-py3-none-any.whl", hash = "sha256:3291ad0d3c701cbf522012faccfbb29352ff16ad262db2139e6b01f15781f14e", size = 56813, upload-time = "2026-02-13T10:04:32.008Z" },
]
[[package]]
name = "typing-extensions"
version = "4.15.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/72/94/1a15dd82efb362ac84269196e94cf00f187f7ed21c242792a923cdb1c61f/typing_extensions-4.15.0.tar.gz", hash = "sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466", size = 109391, upload-time = "2025-08-25T13:49:26.313Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548", size = 44614, upload-time = "2025-08-25T13:49:24.86Z" },
]
[[package]]
name = "typing-inspection"
version = "0.4.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/55/e3/70399cb7dd41c10ac53367ae42139cf4b1ca5f36bb3dc6c9d33acdb43655/typing_inspection-0.4.2.tar.gz", hash = "sha256:ba561c48a67c5958007083d386c3295464928b01faa735ab8547c5692e87f464", size = 75949, upload-time = "2025-10-01T02:14:41.687Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/dc/9b/47798a6c91d8bdb567fe2698fe81e0c6b7cb7ef4d13da4114b41d239f65d/typing_inspection-0.4.2-py3-none-any.whl", hash = "sha256:4ed1cacbdc298c220f1bd249ed5287caa16f34d44ef4e9c3d0cbad5b521545e7", size = 14611, upload-time = "2025-10-01T02:14:40.154Z" },
]
[[package]]
name = "tzdata"
version = "2025.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/5e/a7/c202b344c5ca7daf398f3b8a477eeb205cf3b6f32e7ec3a6bac0629ca975/tzdata-2025.3.tar.gz", hash = "sha256:de39c2ca5dc7b0344f2eba86f49d614019d29f060fc4ebc8a417896a620b56a7", size = 196772, upload-time = "2025-12-13T17:45:35.667Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c7/b0/003792df09decd6849a5e39c28b513c06e84436a54440380862b5aeff25d/tzdata-2025.3-py2.py3-none-any.whl", hash = "sha256:06a47e5700f3081aab02b2e513160914ff0694bce9947d6b76ebd6bf57cfc5d1", size = 348521, upload-time = "2025-12-13T17:45:33.889Z" },
]
[[package]]
name = "xdg-base-dirs"
version = "6.0.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/bf/d0/bbe05a15347538aaf9fa5b51ac3b97075dfb834931fcb77d81fbdb69e8f6/xdg_base_dirs-6.0.2.tar.gz", hash = "sha256:950504e14d27cf3c9cb37744680a43bf0ac42efefc4ef4acf98dc736cab2bced", size = 4085, upload-time = "2024-10-19T14:35:08.114Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/fc/03/030b47fd46b60fc87af548e57ff59c2ca84b2a1dadbe721bb0ce33896b2e/xdg_base_dirs-6.0.2-py3-none-any.whl", hash = "sha256:3c01d1b758ed4ace150ac960ac0bd13ce4542b9e2cdf01312dcda5012cfebabe", size = 4747, upload-time = "2024-10-19T14:35:05.931Z" },
]