Compare commits


217 Commits

Author SHA1 Message Date
FrederikBaerentsen f262411dc4 fix(purchase): treat 0 as valid price throughout and fix minifig import data types 2026-04-30 20:12:14 +02:00
FrederikBaerentsen 711833e5de fix(statistics): show price values with 2 decimal places instead of rounding to whole numbers (see #146) 2026-04-18 14:51:37 +02:00
FrederikBaerentsen 12dead4ded fix(admin): changed browser popup to bootstrap modal 2026-04-18 14:38:48 +02:00
FrederikBaerentsen 0e3ba26010 Merge branch 'bugfix/issue-151' into release/1.4.1 2026-04-18 13:56:53 +02:00
FrederikBaerentsen 6177187103 Merge branch 'bugfix/issue-150' into release/1.4.1 2026-04-18 13:55:49 +02:00
FrederikBaerentsen 5c0daed160 Merge branch 'bugfix/issue-149c' into release/1.4.1 2026-04-18 13:53:55 +02:00
FrederikBaerentsen 66bbed3597 Merge branch 'bugfix/issue-149b' into release/1.4.1 2026-04-18 13:51:13 +02:00
FrederikBaerentsen d3a014765b fix(add): purchase date, price, and notes are now saved when adding sets and minifigures 2026-04-18 11:31:09 +02:00
FrederikBaerentsen 665441c5ac Updated changelog 2026-04-18 10:27:35 +02:00
FrederikBaerentsen d751a3d0af fix(add): replace two-socket approach with single BrickSetSocket for minifigure error where duplicate sets were added (see #150) 2026-04-18 10:16:27 +02:00
FrederikBaerentsen 1b077e86b1 fix(import): handle empty image URLs from Rebrickable for minifigures and parts (see #149) 2026-04-17 19:14:52 +02:00
FrederikBaerentsen ef6bdc823d fix(admin): cast BK_INSTRUCTIONS_ALLOWED_EXTENSIONS as a list not a string (see #149) 2026-04-17 18:07:34 +02:00
FrederikBaerentsen 0567d9817f fix(admin): restore actual defaults on "Reset to Defaults" instead of blanking fields. (see #149) 2026-04-17 17:23:47 +02:00
FrederikBaerentsen fa9e0c3765 Updated changelog 2026-04-17 14:44:15 +02:00
FrederikBaerentsen 69318e7b0b fix(wishes): delete wish owners before wish to avoid FK constraint error 2026-04-17 14:42:40 +02:00
FrederikBaerentsen 9caeebd82e Merge pull request 'release/1.4' (#147) from release/1.4 into master
Reviewed-on: #147
2026-04-15 14:24:00 +02:00
FrederikBaerentsen 16f11a3465 Updated readme 2026-04-15 14:21:54 +02:00
FrederikBaerentsen 57e01f9fb4 Update readme and gitignore and dockerignore 2026-04-15 14:20:41 +02:00
FrederikBaerentsen 754b57f6f4 Updated dockerignore and gitignore 2026-04-15 09:59:25 +02:00
FrederikBaerentsen 82cd083294 Updated readme with paypal button 2026-04-14 20:54:24 +02:00
FrederikBaerentsen 79bc5243eb Updated readme with paypal button 2026-04-14 20:51:30 +02:00
FrederikBaerentsen 126fb1e5cb feat(add): set metadata on add — purchase, notes, statuses
Allow setting owners, purchase (date, price, location), notes,
statuses, storage, and tags directly from the add set form.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-09 20:23:47 +01:00
FrederikBaerentsen 2536cbe170 fix(docker): support non-root users with user: (#138) 2026-02-18 16:24:39 +01:00
FrederikBaerentsen c7d582c908 Updated changelog 2026-02-18 16:20:13 +01:00
FrederikBaerentsen 79e450ed28 Merge pull request 'Fix typo and adjust sentence grammar in database upgrade warning' (#139) from fatherfork/BrickTracker:release/1.4 into release/1.4
Reviewed-on: #139
2026-02-18 16:17:52 +01:00
fatherfork 7f2751de14 Merge branch 'release/1.4' into release/1.4 2026-02-16 02:33:29 +01:00
FrederikBaerentsen 3cfe55ea4a feat(tables): add sortable Checked column and quick-add toggle setting 2026-02-14 13:58:39 +01:00
FrederikBaerentsen 79fbc443ab Updated changelog 2026-02-14 13:55:21 +01:00
fatherfork 3130cedd40 Add word for sentence clarity 2026-02-14 03:43:31 +01:00
fatherfork b672143777 Fix typo
Correct "shold" to "should"
2026-02-14 03:40:28 +01:00
FrederikBaerentsen 2bd1979da2 fix(env): updated env vars to reflect new db structure 2026-02-10 19:06:59 +01:00
FrederikBaerentsen 7bc77b41bc fix(add): fixed lot mode adding for individual parts 2026-02-09 07:54:10 +01:00
FrederikBaerentsen 8234a79cc0 fix(tables): checked parts can now be sorted. Fixes #137 2026-02-06 13:21:51 +01:00
FrederikBaerentsen c8e9ce3fb4 fix(tables): checked parts can now be sorted. Fixes #137 2026-02-06 13:21:14 +01:00
FrederikBaerentsen 383f9899fb fix(tables): client-side sort buttons now sync with DataTables headers to prevent data corruption 2026-02-06 08:03:15 +01:00
FrederikBaerentsen 4aea50587e Updated changelog 2026-01-31 11:12:21 +01:00
FrederikBaerentsen a2883cafe9 fix(individual-parts): bulk add image download and storage deletion validation 2026-01-31 11:08:32 +01:00
FrederikBaerentsen beb09641b9 fix(docker): updated dockerignore 2026-01-31 11:07:44 +01:00
FrederikBaerentsen 4fdf59f9f7 fix(database): fixed front page database query 2026-01-20 13:38:37 +01:00
FrederikBaerentsen 41d34c2088 fix: export options, NOT filter js, status accordion 2026-01-20 13:07:48 +01:00
FrederikBaerentsen 6b42edb663 fix(admin): changed feedback for static env vars on save 2026-01-20 13:06:53 +01:00
FrederikBaerentsen 22aa5afe4b fix(minifigures): removed qty field on minifigure card 2026-01-20 13:06:23 +01:00
FrederikBaerentsen e71a379c0b fix(add): env vars now correctly hides the minifigure text on the add page 2026-01-20 13:06:04 +01:00
FrederikBaerentsen bea8c068b1 fix(css): fixed styling for NOT filter button 2026-01-20 13:05:21 +01:00
FrederikBaerentsen fbf330705f feat(individual): implement read-only mode for individual minifigures and parts 2026-01-20 07:38:06 +01:00
FrederikBaerentsen 4f1997305f feat(minifigures): lock edit if BK_DISABLE_INDIVIDUAL_MINIFIGURES=true 2026-01-19 20:33:18 +01:00
FrederikBaerentsen c944575fbd Release 1.4: Massive update with new individual parts and minifigure management. Documentation updated. Commits for the feature: 05d98b3, 77be333, be3ac28, fa05305, dda171c, 24c8f1e, caaef97, e46e1d5, 93ef88b, b9ae977, 202e924, c947d29, 58ff39f, 6ba28ea 2026-01-19 18:03:37 +01:00
FrederikBaerentsen 05d98b3847 feat(frontend): add socket support and styling for individual items 2026-01-19 17:24:57 +01:00
FrederikBaerentsen 77be333bb2 feat(views): integrate individual items into existing views 2026-01-19 17:24:13 +01:00
FrederikBaerentsen be3ac284f4 feat(sql): update queries to support individual items and fix schema drop order 2026-01-19 17:23:01 +01:00
FrederikBaerentsen fa053055a3 feat(views): update existing models to support individual items integration 2026-01-19 17:19:21 +01:00
FrederikBaerentsen dda171c027 feat(metadata): extend metadata system to support individual minifigures and parts 2026-01-18 21:07:39 +01:00
FrederikBaerentsen 24c8f1e5df feat(core): integrate individual items features with app configuration and versioning 2026-01-18 21:06:32 +01:00
FrederikBaerentsen caaef97313 feat(ui): add templates and scripts for individual items and bulk part addition 2026-01-18 20:38:11 +01:00
FrederikBaerentsen e46e1d5f93 feat(views): add routes for individual minifigures, parts, and purchase locations 2026-01-18 20:34:05 +01:00
FrederikBaerentsen 93ef88b760 feat(parts): add core models for individual parts and lot management 2026-01-18 20:31:53 +01:00
FrederikBaerentsen b9ae97792d feat(minifigures): add core models for individual minifigure tracking 2026-01-18 20:28:13 +01:00
FrederikBaerentsen 202e924848 feat(purchases): add purchase location tracking queries 2026-01-18 20:26:05 +01:00
FrederikBaerentsen c947d29d67 feat(parts): add SQL queries for individual parts and lot management 2026-01-18 20:25:47 +01:00
FrederikBaerentsen 58ff39fbc3 feat(minifigures): add SQL queries for individual minifigures tracking 2026-01-18 20:24:31 +01:00
FrederikBaerentsen 6ba28ea521 feat(database): add individual minifigures and parts schema with migrations 2026-01-18 20:23:00 +01:00
FrederikBaerentsen c40da16d9e fix(socket): testing change to socket for reverse proxy 2026-01-01 10:49:39 -05:00
FrederikBaerentsen d885f3aa11 fix(sets): show note in the grid or details view with new env var 2025-12-30 11:25:42 -05:00
FrederikBaerentsen a72cb67c8c fix(add): fixed #129, so bulk add can have trailing comma and not fail 2025-12-29 09:42:48 -05:00
FrederikBaerentsen 423540bba4 Update version to 1.4.0 2025-12-26 18:05:25 -05:00
FrederikBaerentsen a915a0001f fix(sets): fixed an issue with refresh, where parts didn't get updated correctly 2025-12-26 12:08:50 -05:00
FrederikBaerentsen 2c961f9a78 Updated changelog 2025-12-25 20:49:40 -05:00
FrederikBaerentsen 3e1e846a99 feat(admin): added bulk refresh 2025-12-25 20:46:54 -05:00
FrederikBaerentsen 5725872060 feat(sets): added notes field to sets. Displayed both at top of page, if not empty and in the metadata section, where it can be changed 2025-12-25 20:44:40 -05:00
FrederikBaerentsen dcf9496db9 Updated changelog 2025-12-25 15:20:14 -05:00
FrederikBaerentsen 19e3d8afe6 fix(filter): changed equal and not equal icon to text character to avoid weird resizing 2025-12-25 15:19:47 -05:00
FrederikBaerentsen f54dd3ec73 feat(filter): added not equal to filter on sets page, so it is possible to filter for not-tags, not-status, not-year etc 2025-12-25 14:35:59 -05:00
FrederikBaerentsen 4336ad4de3 fix(sets): cleanup of code for set refresh 2025-12-24 08:28:36 -05:00
FrederikBaerentsen 5418aca8f0 fix(sets): refreshing sets should work now 2025-12-23 23:08:30 -05:00
FrederikBaerentsen 9518b0261c feat(frontpage): added parts row 2025-12-21 21:45:22 -05:00
FrederikBaerentsen 9bd80c1352 feat(frontpage): added parts row 2025-12-21 21:45:11 -05:00
FrederikBaerentsen 2f1bba475d feat(admin): added options to order badges on sets and details page. 2025-12-21 20:52:02 -05:00
FrederikBaerentsen b30deef529 fix(add): fix FK constraint errors when importing sets with metadata 2025-12-21 18:05:09 -05:00
FrederikBaerentsen c20231f654 Merge branch 'release/1.3.2' into release/1.4 2025-12-20 18:08:04 -05:00
FrederikBaerentsen d783b8fbc9 feat(db): added integrity check and cleanup of database to admin page 2025-12-20 17:55:37 -05:00
FrederikBaerentsen 146f3706a5 Update version to 1.3.1 2025-12-20 15:42:47 -05:00
FrederikBaerentsen 951e662113 fix(changelog): updated changelog 2025-12-20 15:24:57 -05:00
FrederikBaerentsen 1184f9bf48 fix(add): fixed #199, foreign key constraint failed 2025-12-20 15:22:45 -05:00
FrederikBaerentsen 6044841329 Updated changelog 2025-12-19 22:43:38 -05:00
FrederikBaerentsen 136f7d03f5 feat(admin): first version of export feature. 2025-12-19 22:41:28 -05:00
FrederikBaerentsen ede8d996e2 fix(debug): fixed debug log not shown 2025-12-19 18:16:08 -05:00
FrederikBaerentsen 45f74848d2 fix(changelog): updated with post-1.3 fixes 2025-12-18 22:45:46 -05:00
FrederikBaerentsen 417bbd178b fix(meta): fixed an issue where owner, status and tag didn't save on sets detail page 2025-12-18 22:16:14 -05:00
FrederikBaerentsen 349648969c fix(minifigures): fix filter on client side pagination 2025-12-18 21:53:19 -05:00
FrederikBaerentsen 7f9a7a2afe fix(error): fixed error message paths 2025-12-18 13:44:53 -05:00
FrederikBaerentsen 451b8e14a1 fix(admin): nil images now uses correct folder. 2025-12-18 13:20:26 -05:00
FrederikBaerentsen cca5b6d88e fix(readme): updated readme 2025-12-18 11:35:02 -05:00
FrederikBaerentsen 678499a9f2 Merge pull request '.gitignore update' (#117) from release/1.3 into master
Reviewed-on: #117
2025-12-18 17:25:47 +01:00
FrederikBaerentsen 8fab57d55a .gitignore update 2025-12-18 10:17:07 -05:00
FrederikBaerentsen b1c32ea5aa Merge pull request 'release/1.3' (#116) from release/1.3 into master
Reviewed-on: #116
2025-12-18 01:41:28 +01:00
FrederikBaerentsen 577f9a566d feat(migration): added documentation links to migration page 2025-12-17 19:33:34 -05:00
FrederikBaerentsen 1263f775c3 feat(readme): updated readme with logo 2025-12-17 19:22:53 -05:00
FrederikBaerentsen 3f95f49e31 feat(readme): updated readme with links to new documentation 2025-12-17 19:19:28 -05:00
FrederikBaerentsen d134974b84 feat(logo): Image updated to own design 2025-12-17 17:56:51 -05:00
FrederikBaerentsen 728b030ee1 feat(docs): new images for documentation 2025-12-17 14:00:36 -05:00
FrederikBaerentsen bcbeff8a3c fix(sets): filters now uses two rows on sets page 2025-12-17 13:07:54 -05:00
FrederikBaerentsen 01a5114bb0 fix(admin): fixed link to migration guide 2025-12-17 10:27:13 -05:00
FrederikBaerentsen 6003419069 fix(git): updated gitignore 2025-12-17 10:26:29 -05:00
FrederikBaerentsen e32b82b961 feat(env): updated examples 2025-12-17 10:26:10 -05:00
FrederikBaerentsen c45d696a48 fix(docs): updated migration guide with backup warning. 2025-12-15 22:27:13 -05:00
FrederikBaerentsen a98f4faaeb fix(docker): updated compose files for v1.3 changes 2025-12-15 22:26:45 -05:00
FrederikBaerentsen 343f2f2fe9 fix(changelog): updated changelog formatting 2025-12-15 22:15:05 -05:00
FrederikBaerentsen 41b5f60e0a fix(changelog): updated changelog for 1.3 2025-12-15 22:11:40 -05:00
FrederikBaerentsen 41aed75b37 fix(docs): updated migration guide 2025-12-15 21:32:36 -05:00
FrederikBaerentsen 7651ac187d fix(env): create folder if doesn't exist, when saving .env file 2025-12-15 21:20:56 -05:00
FrederikBaerentsen 7cc8de596e fix(docker): changes Dockerfile command order to use pip cache 2025-12-15 19:53:00 -05:00
FrederikBaerentsen d207f22990 fix(docker): fixing exit code 137 when stopping container 2025-12-15 19:50:29 -05:00
FrederikBaerentsen 2cc23b5ffa feat(darkmode): updated changelog with darkmode info 2025-12-15 19:34:43 -05:00
FrederikBaerentsen b2e4597ab5 feat(darkmode): added darkmode with env var setting and live settings on admin page 2025-12-15 17:52:05 -05:00
FrederikBaerentsen 7369d0babf feat(parts): Added option to hide spare parts but still save them to db 2025-12-07 20:41:13 +01:00
FrederikBaerentsen d6d0a70116 fix(socket): added better debug logging and added polling as priority over websocket, for better iOS connection 2025-12-07 20:38:37 +01:00
FrederikBaerentsen 91ef4158b7 fix(env): settings are not locked after save anymore 2025-12-06 21:04:04 +01:00
FrederikBaerentsen e1eea7295d fix(env): moved .env to data folder. admin page, now correctly works with changes to variables 2025-12-06 20:48:30 +01:00
FrederikBaerentsen bc8864ab2a fix(inst): removed cloudscraper 2025-12-06 15:41:05 +01:00
FrederikBaerentsen 7860b71ccd fix(sets): adding sets now works after migration 20 2025-12-06 15:40:44 +01:00
FrederikBaerentsen 60e4fe8037 fix(inst): removed cloudscraper as it caused issues with rebrickable instructions 2025-12-05 23:51:09 +01:00
FrederikBaerentsen 85728e2d68 fix(inst): fixed folder path for instructions 2025-12-05 22:38:38 +01:00
FrederikBaerentsen 00ca611217 fix(inst): download from rebrickable work again. fixed folder path and rebrickable connection 2025-12-05 22:31:19 +01:00
FrederikBaerentsen 1e17185114 fix(sets): if no set image exists, use nil image 2025-12-05 20:51:41 +01:00
FrederikBaerentsen 41e61a2f41 fix(sets): set number can now be alphanumerical 2025-12-05 19:33:25 +01:00
FrederikBaerentsen 4d4a1aa9f9 feat: new user data structure. see docs/migration_guide 2025-12-05 17:59:56 +01:00
FrederikBaerentsen 29c5d81160 feat(stats): statistics page now requires authentication if enabled 2025-11-06 21:57:05 +01:00
FrederikBaerentsen 891a55ee9e fix: moved clear filter button 2025-11-06 21:39:25 +01:00
FrederikBaerentsen 0fedd430b3 fix: removed ?page=1 on client-side pagination 2025-11-06 21:08:16 +01:00
FrederikBaerentsen 346f8e9908 feat: added clear filter button to sets/parts/problems/minifigures 2025-11-06 18:53:19 +01:00
FrederikBaerentsen 7567cb51af feat(prob): added filter for tag and storage 2025-11-06 18:06:27 +01:00
FrederikBaerentsen 61450312ff feat: added filters to /parts, /problems, /minifigures 2025-11-06 17:51:43 +01:00
FrederikBaerentsen 22cdb713d7 fix(admin): changed accordion style on settings 2025-11-06 09:16:12 +01:00
FrederikBaerentsen 81b7ebf1a6 fix(sql): set will now be deleted correctly 2025-11-06 09:08:53 +01:00
FrederikBaerentsen 7445666f25 fix(statistics): statistics will now load correctly if no sets are found 2025-11-06 09:08:36 +01:00
FrederikBaerentsen e65a9454a8 Updated gitignore 2025-11-06 08:29:32 +01:00
FrederikBaerentsen 8053f5d30c feat(sets): show bricklink if enabled 2025-10-03 10:16:56 +02:00
FrederikBaerentsen 7eb199d289 fix(env): changed default minifigures folder from minifigs to minifigures (#92) 2025-10-03 09:50:41 +02:00
FrederikBaerentsen 6364da676b fix(admin): added log into to respect debug var 2025-10-03 09:22:45 +02:00
FrederikBaerentsen a3d08d8cf6 feat(sets): added filter on sets page to show duplicate sets. default is shown. can be hidden using env var. works with consolidated sets too. 2025-10-03 09:13:15 +02:00
FrederikBaerentsen 4b653ac270 feat(admin): added live configuration management, where user can enable/disable and change configurations without editing .env file. Some changes will need an application restart 2025-10-03 00:15:21 +02:00
FrederikBaerentsen a70a1660f0 fix(admin): open the right drawer on database upgrade 2025-10-02 23:52:13 +02:00
FrederikBaerentsen 0db749fce0 doc(changelog): updated changelog. 2025-10-02 14:58:23 +02:00
FrederikBaerentsen 256108bbdb feat(sql): WAL and index optimization 2025-10-02 14:53:58 +02:00
FrederikBaerentsen 145d9d5dcb feat(admin): database is expanded by default 2025-10-02 14:35:37 +02:00
FrederikBaerentsen b9d42c2866 feat(admin): new env var. for which sections should be open by default in the admin page. 2025-10-02 14:27:32 +02:00
FrederikBaerentsen d1988d015e fix(sets): year-filter now correctly show all years not just current page. 2025-10-02 14:02:51 +02:00
FrederikBaerentsen 8e458b01d1 Merge pull request 'feature/statistics' (#107) from feature/statistics into release/1.3
Reviewed-on: #107
2025-10-02 13:36:31 +02:00
FrederikBaerentsen 989e0d57d0 Fixed date formatting on consolidated sets 2025-10-01 21:17:44 +02:00
FrederikBaerentsen 1097255dca Fixed consolidated price on card 2025-10-01 21:11:14 +02:00
FrederikBaerentsen 7ffbc41f0a Updated changelog 2025-10-01 21:02:58 +02:00
FrederikBaerentsen 11f9e5782f Added charts, env var for charts, fixed formatting and table columns 2025-10-01 20:52:29 +02:00
FrederikBaerentsen 5f43e979f9 feat(statistics): Initial upload 2025-10-01 19:43:25 +02:00
FrederikBaerentsen 4375f018a4 Merge pull request 'feature/consolidation' (#106) from feature/consolidation into release/1.3
Reviewed-on: #106
2025-10-01 19:28:49 +02:00
FrederikBaerentsen 87472039be Changed border color 2025-10-01 19:22:57 +02:00
FrederikBaerentsen c1089c349f Fixed total minifigures for consolidated sets 2025-09-28 08:59:10 +02:00
FrederikBaerentsen 3f6af51a43 Changed the look of consolidated cards when multiple statuses are used. 2025-09-28 08:42:33 +02:00
FrederikBaerentsen bc3cc176ef Fixed purchase information on consolidated cards 2025-09-27 23:43:27 +02:00
FrederikBaerentsen 4a1a265fa8 Updated changelog 2025-09-27 23:32:45 +02:00
FrederikBaerentsen 7c95583345 Changed the "Multiple Copies Available" view and fixed border formatting. 2025-09-27 23:30:13 +02:00
FrederikBaerentsen 65f23c1f12 Fixed nested box formatting. 2025-09-27 23:06:53 +02:00
FrederikBaerentsen aa6c969a6b Fixed consolidating sets. 2025-09-27 23:06:06 +02:00
FrederikBaerentsen 0bff20215c Merge pull request 'feature/checkbox' (#105) from feature/checkbox into release/1.3
Reviewed-on: #105
2025-09-27 16:26:04 +02:00
FrederikBaerentsen d0147b8061 Incremented version to 1.3.0 2025-09-27 16:17:05 +02:00
FrederikBaerentsen ca0de215ab Fixed damaged parts drawer showing on minifigures when no parts are damaged. 2025-09-26 12:46:31 +02:00
FrederikBaerentsen 05b259e494 Removed checkboxes from minifigures details page 2025-09-26 12:28:49 +02:00
FrederikBaerentsen f03fd82be1 Feat(checkbox): Initial upload 2025-09-26 11:47:15 +02:00
FrederikBaerentsen a769e5464b Merge pull request 'feature/peeron' (#104) from feature/peeron into release/1.3
Reviewed-on: #104
2025-09-26 11:40:01 +02:00
FrederikBaerentsen 40871a1c10 Changed download string 2025-09-26 11:37:49 +02:00
FrederikBaerentsen caac283905 Updated peeron download logic with proper socket. 2025-09-26 11:31:22 +02:00
FrederikBaerentsen 4bc0ef5cc4 Peeron thumbnails cache, as peeron uses http and can't live link to https 2025-09-25 22:09:36 +02:00
FrederikBaerentsen ec4f44a3ab Removed unused import 2025-09-25 21:46:58 +02:00
FrederikBaerentsen 0a29543939 Cleanup of peeron download 2025-09-25 21:42:15 +02:00
FrederikBaerentsen 74fe14f09b Added rotation, moved select all, added link after download 2025-09-25 20:47:41 +02:00
FrederikBaerentsen 787624c432 Added env variables and fixed socket for peeron 2025-09-24 21:59:10 +02:00
FrederikBaerentsen eddf4311d3 Feat(peeron): Initial upload 2025-09-24 21:59:10 +02:00
FrederikBaerentsen 90c0c20d75 Merge pull request 'feature/pagination' (#101) from feature/pagination into release/1.3
Reviewed-on: #101
2025-09-24 21:49:05 +02:00
FrederikBaerentsen d2d388b142 Merge branch 'release/1.3' into feature/pagination 2025-09-24 21:47:54 +02:00
FrederikBaerentsen acf06e1955 Updated change log 2025-09-24 21:36:40 +02:00
FrederikBaerentsen c465e9814c Fixed duplicate color in parts dropdown 2025-09-24 21:24:51 +02:00
FrederikBaerentsen 046493294f Moved sort/filter buttons 2025-09-24 20:44:50 +02:00
FrederikBaerentsen 1096fbdef6 Fixed sorting icon on sets page 2025-09-24 20:40:46 +02:00
FrederikBaerentsen fc405e0832 Consolidated parts.js, problems.js and minifigures.js 2025-09-24 20:18:30 +02:00
FrederikBaerentsen cce96af09b Consolidate duplicate collapsible state management 2025-09-24 19:53:01 +02:00
FrederikBaerentsen f953a44593 Disabled table sort using headers, if server-side pagination is enabled. 2025-09-24 19:08:34 +02:00
FrederikBaerentsen e87cb90e20 Updated gitignore 2025-09-23 18:07:42 +02:00
FrederikBaerentsen f3fada9dd8 Updated gitignore 2025-09-23 17:58:15 +02:00
FrederikBaerentsen 4eae6b19dc Updated gitignore 2025-09-23 17:55:26 +02:00
FrederikBaerentsen 064b79bf9e Merge remote-tracking branch 'origin/master' into feature/pagination 2025-09-23 17:24:58 +02:00
FrederikBaerentsen 7c1cb66f67 Merge pull request 'hotfix/pagination-bug' (#99) from hotfix/pagination-bug into master
Reviewed-on: #99
2025-09-23 17:16:30 +02:00
FrederikBaerentsen 5641b3e740 Merge branch 'master' into hotfix/pagination-bug 2025-09-23 17:12:37 +02:00
FrederikBaerentsen 9317a1baae Removed code for another feature 2025-09-23 17:10:13 +02:00
FrederikBaerentsen 6f6d90aa60 fix(pagination): Fixed socket gevent (#95) 2025-09-23 17:06:18 +02:00
FrederikBaerentsen 83a45795c3 Merge pull request 'fix(pagination): Fix #95. Switch from eventlet to gevent' (#98) from hotfix/pagination-bug into master
Reviewed-on: #98
2025-09-23 16:51:27 +02:00
FrederikBaerentsen 572c52dada fix(pagination): Added requirements.txt 2025-09-23 16:46:43 +02:00
FrederikBaerentsen 909655c10a fix(pagination): Fix #95. Switch from eventlet to gevent 2025-09-23 16:42:03 +02:00
FrederikBaerentsen d1b79de411 Updated .env.sample with new variables 2025-09-23 16:41:38 +02:00
FrederikBaerentsen 1e767537b9 fix(pagination): Fix #95. Switch from eventlet to gevent 2025-09-23 16:36:22 +02:00
FrederikBaerentsen 8ee0d144be Updated gitignore 2025-09-23 15:16:51 +02:00
FrederikBaerentsen f7963b4723 Removed datatable-search field from minifigures page 2025-09-22 10:08:41 +02:00
FrederikBaerentsen 52b6c94483 Fixed problems pagination 2025-09-22 10:01:16 +02:00
FrederikBaerentsen b5236fae51 Added filter/search/pagination to 'Problems' 2025-09-22 09:36:25 +02:00
FrederikBaerentsen 9d0a48ee2a Fixed gitignore 2025-09-21 19:03:08 +02:00
FrederikBaerentsen 5677d731e4 Updated gitignore 2025-09-21 18:56:56 +02:00
FrederikBaerentsen fcdcd12184 Updated .env sample file 2025-09-21 18:21:29 +02:00
FrederikBaerentsen e1891e8bd6 Added more pagination options 2025-09-21 18:18:26 +02:00
FrederikBaerentsen af53b29818 Removed print log spam 2025-09-21 17:32:11 +02:00
FrederikBaerentsen 8a0a7837dc Fixed filtering on /sets page. 2025-09-21 17:26:57 +02:00
FrederikBaerentsen 4b3aef577a Fixed sorting and filtering on /sets. 2025-09-21 15:58:32 +02:00
FrederikBaerentsen 9a32a3f193 Merge remote-tracking branch 'origin/master' into feature/pagination 2025-09-17 18:32:53 +02:00
FrederikBaerentsen c71667cd41 Fix: #80, default images not downloading (also present in feature/pagination) 2025-09-17 18:07:28 +02:00
FrederikBaerentsen 421d635dd3 Moved import and added ignore to BeautifulSoup type annotation issues 2025-09-17 17:03:24 +02:00
FrederikBaerentsen 6bc406b70d Fixed broken wishlist page 2025-09-17 16:49:53 +02:00
FrederikBaerentsen 5fa145a9d7 Fixed pagination button size 2025-09-17 16:34:29 +02:00
FrederikBaerentsen 3bfd1c17dd Sets, Parts and Minifigures have pagination now 2025-09-17 07:06:34 +02:00
FrederikBaerentsen 46dada312a Added page size option 2025-09-16 18:26:21 +02:00
FrederikBaerentsen c876e1e3a4 Added pagination to /parts page. 2025-09-16 15:30:54 +02:00
313 changed files with 23748 additions and 1048 deletions
+8 -5
@@ -8,11 +8,17 @@ static/sets
Dockerfile
compose.yaml
# Local data directories
local/
offline/
data/
# Documentation
docs/
LICENSE
*.md
*.sample
.code-workspace
# Temporary
*.csv
@@ -26,11 +32,8 @@ LICENSE
**/__pycache__
*.pyc
# Git
.git
# IDE
.vscode
# Hidden directories
.?*
# Dev
test-server.sh
+249 -51
@@ -1,3 +1,23 @@
# ================================================================================================
# BrickTracker Configuration File
# ================================================================================================
#
# FILE LOCATION (v1.3+):
# ----------------------
# This file can be placed in two locations:
# 1. data/.env (RECOMMENDED) - Included in data volume backup, settings persist via admin panel
# 2. .env (root) - Backward compatible
#
# Priority: data/.env > .env (root)
#
# The application automatically detects and uses the correct location at runtime.
#
# For Docker:
# - Recommended: Place this file as data/.env (included in data volume)
# - Backward compatible: Keep as .env in root (add "env_file: .env" to compose.yaml)
#
# ================================================================================================
#
# Note on *_DEFAULT_ORDER
# If set, it will append a direct ORDER BY <whatever you set> to the SQL query
# while listing objects. You can look at the structure of the SQLite database to
@@ -32,15 +52,20 @@
# Default: https://www.bricklink.com/v2/catalog/catalogitem.page?P={part}&C={color}
# BK_BRICKLINK_LINK_PART_PATTERN=
# Optional: Pattern of the link to Bricklink for a set. Will be passed to Python .format()
# Supports {set_num} parameter. Set numbers in format like '10255-1' are used.
# Default: https://www.bricklink.com/v2/catalog/catalogitem.page?S={set_num}
# BK_BRICKLINK_LINK_SET_PATTERN=
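# Since the pattern is passed to Python's str.format(), substitution of the
# default pattern works as below (using the '10255-1' set-number format
# mentioned above):

```python
# {set_num} is replaced via str.format() with the set number, e.g. '10255-1'.
default_pattern = (
    "https://www.bricklink.com/v2/catalog/catalogitem.page?S={set_num}"
)
link = default_pattern.format(set_num="10255-1")
# → https://www.bricklink.com/v2/catalog/catalogitem.page?S=10255-1
```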
# Optional: Display Bricklink links wherever applicable
# Default: false
# BK_BRICKLINK_LINKS=true
# Optional: Path to the database.
# Optional: Path to the database, relative to '/app/' folder
# Useful if you need it mounted in a Docker volume. Keep in mind that it will not
# do any check on the existence of the path, or if it is dangerous.
# Default: ./app.db
# BK_DATABASE_PATH=/var/lib/bricktracker/app.db
# Default: data/app.db
# BK_DATABASE_PATH=data/app.db
# Optional: Format of the timestamp added to the database file when downloading it
# Check https://docs.python.org/3/library/time.html#time.strftime for format details
@@ -81,9 +106,9 @@
# Default: .pdf
# BK_INSTRUCTIONS_ALLOWED_EXTENSIONS=.pdf, .docx, .png
# Optional: Folder where to store the instructions, relative to the '/app/static/' folder
# Default: instructions
# BK_INSTRUCTIONS_FOLDER=/var/lib/bricktracker/instructions/
# Optional: Folder where to store the instructions, relative to '/app/' folder
# Default: data/instructions
# BK_INSTRUCTIONS_FOLDER=data/instructions
# Optional: Hide the 'Add' entry from the menu. Does not disable the route.
# Default: false
@@ -97,6 +122,14 @@
# Default: false
# BK_HIDE_ADMIN=true
# Optional: Admin sections to expand by default (comma-separated list)
# Valid sections: authentication, instructions, image, theme, retired, metadata, owner, purchase_location, status, storage, tag, database
# Default: database (maintains original behavior with database section expanded)
# Examples:
# BK_ADMIN_DEFAULT_EXPANDED_SECTIONS=database,theme
# BK_ADMIN_DEFAULT_EXPANDED_SECTIONS=instructions,metadata
# BK_ADMIN_DEFAULT_EXPANDED_SECTIONS= (all sections collapsed)
# Optional: Hide the 'Instructions' entry from the menu. Does not disable the route.
# Default: false
# BK_HIDE_ALL_INSTRUCTIONS=true
@@ -122,6 +155,10 @@
# Default: false
# BK_HIDE_ALL_STORAGES=true
# Optional: Hide the 'Statistics' entry from the menu. Does not disable the route.
# Default: false
# BK_HIDE_STATISTICS=true
# Optional: Hide the 'Instructions' entry in a Set card
# Default: false
# BK_HIDE_SET_INSTRUCTIONS=true
@@ -134,21 +171,46 @@
# Default: false
# BK_HIDE_TABLE_MISSING_PARTS=true
# Optional: Hide the 'Checked' column from the parts table.
# Default: false
# BK_HIDE_TABLE_CHECKED_PARTS=true
# Optional: Hide the 'Wishlist' entry from the menu. Does not disable the route.
# Default: false
# BK_HIDE_WISHES=true
# Optional: Hide the 'Individual Minifigures' entry from the menu. Does not disable the route.
# Default: false
# BK_HIDE_INDIVIDUAL_MINIFIGURES=true
# Optional: Hide the 'Individual Parts' entry from the menu. Does not disable the route.
# Default: false
# BK_HIDE_INDIVIDUAL_PARTS=true
# Optional: Hide the 'Add to individual parts' quick-add buttons in parts tables.
# The column header with menu options (mark all missing, check all, etc.) remains visible.
# Default: false
# BK_HIDE_QUICK_ADD_INDIVIDUAL_PARTS=true
# Optional: Change the default order of minifigures. By default ordered by insertion order.
# Useful column names for this option are:
# - "rebrickable_minifigures"."figure": minifigure ID (fig-xxxxx)
# - "rebrickable_minifigures"."number": minifigure ID as an integer (xxxxx)
# - "rebrickable_minifigures"."figure": minifigure ID (e.g., "fig-001234")
# - "rebrickable_minifigures"."number": minifigure ID as an integer (e.g., 1234)
# - "rebrickable_minifigures"."name": minifigure name
# - "rebrickable_minifigures"."number_of_parts": number of parts in the minifigure
# - "bricktracker_minifigures"."quantity": quantity owned
# - "total_missing": number of missing parts (composite field)
# - "total_damaged": number of damaged parts (composite field)
# - "total_quantity": total quantity across all sets (composite field)
# - "total_sets": number of sets containing this minifigure (composite field)
# Default: "rebrickable_minifigures"."name" ASC
# BK_MINIFIGURES_DEFAULT_ORDER="rebrickable_minifigures"."name" ASC
# Examples:
# BK_MINIFIGURES_DEFAULT_ORDER="rebrickable_minifigures"."number" DESC
# BK_MINIFIGURES_DEFAULT_ORDER="total_missing" DESC, "rebrickable_minifigures"."name" ASC
# Optional: Folder where to store the minifigures images, relative to the '/app/static/' folder
# Default: minifigs
# BK_MINIFIGURES_FOLDER=minifigures
# Optional: Folder where to store the minifigures images, relative to '/app/' folder
# Default: data/minifigures
# BK_MINIFIGURES_FOLDER=data/minifigures
# Optional: Disable threading on the task executed by the socket.
# You should not need to change this parameter unless you are debugging something with the
@@ -158,17 +220,67 @@
# Optional: Change the default order of parts. By default ordered by insertion order.
# Useful column names for this option are:
# - "bricktracker_parts"."part": part number
# - "bricktracker_parts"."spare": part is a spare part
# - "combined"."part": part number (e.g., "3001")
# - "combined"."spare": part is a spare part (0 or 1)
# - "combined"."quantity": quantity of this part
# - "combined"."missing": number of missing parts
# - "combined"."damaged": number of damaged parts
# - "rebrickable_parts"."name": part name
# - "rebrickable_parts"."color_name": part color name
# - "total_missing": number of missing parts
# Default: "rebrickable_parts"."name" ASC, "rebrickable_parts"."color_name" ASC, "bricktracker_parts"."spare" ASC
# BK_PARTS_DEFAULT_ORDER="total_missing" DESC, "rebrickable_parts"."name" ASC
# - "total_missing": total missing across all sets (composite field)
# - "total_damaged": total damaged across all sets (composite field)
# - "total_quantity": total quantity across all sets (composite field)
# - "total_sets": number of sets containing this part (composite field)
# - "total_minifigures": number of minifigures with this part (composite field)
# Default: "rebrickable_parts"."name" ASC, "rebrickable_parts"."color_name" ASC, "combined"."spare" ASC
# Examples:
# BK_PARTS_DEFAULT_ORDER="total_missing" DESC, "rebrickable_parts"."name" ASC
# BK_PARTS_DEFAULT_ORDER="rebrickable_parts"."color_name" ASC, "rebrickable_parts"."name" ASC
# Optional: Folder where to store the parts images, relative to the '/app/static/' folder
# Default: parts
# BK_PARTS_FOLDER=parts
# Optional: Folder where to store the parts images, relative to '/app/' folder
# Default: data/parts
# BK_PARTS_FOLDER=data/parts
# Optional: Enable server-side pagination for individual pages (recommended for large collections)
# When enabled, pages use server-side pagination with configurable page sizes
# When disabled, pages load all data at once with instant client-side search
# Default: false for all
# BK_SETS_SERVER_SIDE_PAGINATION=true
# BK_PARTS_SERVER_SIDE_PAGINATION=true
# BK_MINIFIGURES_SERVER_SIDE_PAGINATION=true
# BK_PROBLEMS_SERVER_SIDE_PAGINATION=true
# Optional: Number of parts to show per page on desktop devices (when server-side pagination is enabled)
# Default: 10
# BK_PARTS_PAGINATION_SIZE_DESKTOP=10
# Optional: Number of parts to show per page on mobile devices (when server-side pagination is enabled)
# Default: 5
# BK_PARTS_PAGINATION_SIZE_MOBILE=5
# Optional: Number of sets to show per page on desktop devices (when server-side pagination is enabled)
# Should be divisible by 4 for grid layout. Default: 12
# BK_SETS_PAGINATION_SIZE_DESKTOP=12
# Optional: Number of sets to show per page on mobile devices (when server-side pagination is enabled)
# Default: 4
# BK_SETS_PAGINATION_SIZE_MOBILE=4
# Optional: Number of minifigures to show per page on desktop devices (when server-side pagination is enabled)
# Default: 10
# BK_MINIFIGURES_PAGINATION_SIZE_DESKTOP=10
# Optional: Number of minifigures to show per page on mobile devices (when server-side pagination is enabled)
# Default: 5
# BK_MINIFIGURES_PAGINATION_SIZE_MOBILE=5
# Optional: Number of problems to show per page on desktop devices (when server-side pagination is enabled)
# Default: 10
# BK_PROBLEMS_PAGINATION_SIZE_DESKTOP=10
# Optional: Number of problems to show per page on mobile devices (when server-side pagination is enabled)
# Default: 5
# BK_PROBLEMS_PAGINATION_SIZE_MOBILE=5
# Optional: Port the server will listen on.
# Default: 3333
@@ -185,9 +297,12 @@
# Optional: Change the default order of purchase locations. By default ordered by insertion order.
# Useful column names for this option are:
# - "bricktracker_metadata_purchase_locations"."name" ASC: storage name
# - "bricktracker_metadata_purchase_locations"."name": purchase location name
# - "bricktracker_metadata_purchase_locations"."rowid": insertion order (special column)
# Default: "bricktracker_metadata_purchase_locations"."name" ASC
# BK_PURCHASE_LOCATION_DEFAULT_ORDER="bricktracker_metadata_purchase_locations"."name" ASC
# Examples:
# BK_PURCHASE_LOCATION_DEFAULT_ORDER="bricktracker_metadata_purchase_locations"."name" DESC
# BK_PURCHASE_LOCATION_DEFAULT_ORDER="bricktracker_metadata_purchase_locations"."rowid" DESC
# Optional: Shuffle the lists on the front page.
# Default: false
@@ -203,27 +318,54 @@
# Optional: URL of the image representing a missing image in Rebrickable
# Default: https://rebrickable.com/static/img/nil.png
# BK_REBRICKABLE_IMAGE_NIL=
# BK_REBRICKABLE_IMAGE_NIL=https://rebrickable.com/static/img/nil.png
# Optional: URL of the image representing a missing minifigure image in Rebrickable
# Default: https://rebrickable.com/static/img/nil_mf.jpg
# BK_REBRICKABLE_IMAGE_NIL_MINIFIGURE=
# BK_REBRICKABLE_IMAGE_NIL_MINIFIGURE=https://rebrickable.com/static/img/nil_mf.jpg
# Optional: Pattern of the link to Rebrickable for a minifigure. Will be passed to Python .format()
# Default: https://rebrickable.com/minifigs/{figure}
# BK_REBRICKABLE_LINK_MINIFIGURE_PATTERN=
# BK_REBRICKABLE_LINK_MINIFIGURE_PATTERN=https://rebrickable.com/minifigs/{figure}
# Optional: Pattern of the link to Rebrickable for a part. Will be passed to Python .format()
# Default: https://rebrickable.com/parts/{part}/_/{color}
# BK_REBRICKABLE_LINK_PART_PATTERN=
# BK_REBRICKABLE_LINK_PART_PATTERN=https://rebrickable.com/parts/{part}/_/{color}
# Optional: Pattern of the link to Rebrickable for instructions. Will be passed to Python .format()
# Default: https://rebrickable.com/instructions/{path}
# BK_REBRICKABLE_LINK_INSTRUCTIONS_PATTERN=
# BK_REBRICKABLE_LINK_INSTRUCTIONS_PATTERN=https://rebrickable.com/instructions/{path}
# Optional: User-Agent to use when querying Rebrickable outside of the Rebrick python library
# Default: 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
# BK_REBRICKABLE_USER_AGENT=
# Optional: User-Agent to use when querying Rebrickable and Peeron outside of the Rebrick python library
# Default: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36
# BK_USER_AGENT=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36
# Legacy: User-Agent for Rebrickable (use BK_USER_AGENT instead)
# BK_REBRICKABLE_USER_AGENT=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36
# Optional: Delay in milliseconds between Peeron page downloads to avoid being potentially blocked
# Default: 1000
# BK_PEERON_DOWNLOAD_DELAY=1000
# Optional: Minimum image size (width/height) for valid Peeron instruction pages
# Images smaller than this are considered error placeholders and will be rejected
# Default: 100
# BK_PEERON_MIN_IMAGE_SIZE=100
# Optional: Pattern for Peeron instruction page URLs. Will be passed to Python .format()
# Supports {set_number} and {version_number} parameters
# Default: http://peeron.com/scans/{set_number}-{version_number}
# BK_PEERON_INSTRUCTION_PATTERN=
# Optional: Pattern for Peeron thumbnail URLs. Will be passed to Python .format()
# Supports {set_number} and {version_number} parameters
# Default: http://belay.peeron.com/thumbs/{set_number}-{version_number}/
# BK_PEERON_THUMBNAIL_PATTERN=
# Optional: Pattern for Peeron scan URLs. Will be passed to Python .format()
# Supports {set_number} and {version_number} parameters
# Default: http://belay.peeron.com/scans/{set_number}-{version_number}/
# BK_PEERON_SCAN_PATTERN=
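As the comments note, these patterns are expanded with Python's `str.format()`. A minimal sketch of how such a pattern could be filled in; the `peeron_scan_url` helper is illustrative, and only the pattern string and the `{set_number}`/`{version_number}` placeholder names come from this sample:

```python
# Default scan pattern from the sample above; helper name is hypothetical
SCAN_PATTERN = "http://belay.peeron.com/scans/{set_number}-{version_number}/"

def peeron_scan_url(set_number: str, version_number: int) -> str:
    # Both placeholders are filled by keyword, matching the documented
    # {set_number} and {version_number} parameters
    return SCAN_PATTERN.format(
        set_number=set_number, version_number=version_number
    )

print(peeron_scan_url("10255", 1))
# http://belay.peeron.com/scans/10255-1/
```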
# Optional: Display Rebrickable links wherever applicable
# Default: false
@@ -238,27 +380,39 @@
# Default: https://docs.google.com/spreadsheets/d/1rlYfEXtNKxUOZt2Mfv0H17DvK7bj6Pe0CuYwq6ay8WA/gviz/tq?tqx=out:csv&sheet=Sorted%20by%20Retirement%20Date
# BK_RETIRED_SETS_FILE_URL=
# Optional: Path to the unofficial retired sets lists
# Optional: Path to the unofficial retired sets lists, relative to '/app/' folder
# You can name it whatever you want, but content has to be a CSV
# Default: ./retired_sets.csv
# BK_RETIRED_SETS_PATH=/var/lib/bricktracker/retired_sets.csv
# Default: data/retired_sets.csv
# BK_RETIRED_SETS_PATH=data/retired_sets.csv
# Optional: Change the default order of sets. By default ordered by insertion order.
# Useful column names for this option are:
# - "rebrickable_sets"."set": set number as a string
# - "rebrickable_sets"."number": the number part of set as an integer
# - "rebrickable_sets"."version": the version part of set as an integer
# - "rebrickable_sets"."set": set number as a string (e.g., "10255-1")
# - "rebrickable_sets"."number": the number part of set as text (e.g., "10255")
# - "rebrickable_sets"."version": the version part of set as an integer (e.g., 1)
# - "rebrickable_sets"."name": set name
# - "rebrickable_sets"."year": set release year
# - "rebrickable_sets"."number_of_parts": set number of parts
# - "total_missing": number of missing parts
# - "total_minifigures": number of minifigures
# - "bricktracker_sets"."purchase_date": purchase date (as REAL/Julian day)
# - "bricktracker_sets"."purchase_price": purchase price
# - "total_missing": number of missing parts (composite field)
# - "total_damaged": number of damaged parts (composite field)
# - "total_minifigures": number of minifigures (composite field)
# Default: "rebrickable_sets"."number" DESC, "rebrickable_sets"."version" ASC
# BK_SETS_DEFAULT_ORDER="rebrickable_sets"."year" ASC
# Examples:
# BK_SETS_DEFAULT_ORDER="rebrickable_sets"."year" DESC, "rebrickable_sets"."name" ASC
# BK_SETS_DEFAULT_ORDER="rebrickable_sets"."number_of_parts" DESC
# BK_SETS_DEFAULT_ORDER="total_missing" DESC, "rebrickable_sets"."year" ASC
# Optional: Folder where to store the sets images, relative to the '/app/static/' folder
# Default: sets
# BK_SETS_FOLDER=sets
# Optional: Folder where to store the sets images, relative to '/app/' folder
# Default: data/sets
# BK_SETS_FOLDER=data/sets
# Optional: Enable set consolidation/grouping on the main sets page
# When enabled, multiple copies of the same set are grouped together showing instance count
# When disabled, each set copy is displayed individually (original behavior)
# Default: false
# BK_SETS_CONSOLIDATION=true
# Optional: Make the grid filters displayed by default, rather than collapsed
# Default: false
@@ -268,10 +422,18 @@
# Default: false
# BK_SHOW_GRID_SORT=true
# Optional: Skip saving or displaying spare parts
# Optional: Show duplicate filter button on sets page
# Default: true
# BK_SHOW_SETS_DUPLICATE_FILTER=true
# Optional: Skip importing spare parts when downloading sets from Rebrickable
# Default: false
# BK_SKIP_SPARE_PARTS=true
# Optional: Hide spare parts from parts lists (spare parts must still be in database)
# Default: false
# BK_HIDE_SPARE_PARTS=true
# Optional: Namespace of the Socket.IO socket
# Default: bricksocket
# BK_SOCKET_NAMESPACE=customsocket
@@ -282,18 +444,21 @@
# Optional: Change the default order of storages. By default ordered by insertion order.
# Useful column names for this option are:
# - "bricktracker_metadata_storages"."name" ASC: storage name
# - "bricktracker_metadata_storages"."name": storage name
# - "bricktracker_metadata_storages"."rowid": insertion order (special column)
# Default: "bricktracker_metadata_storages"."name" ASC
# BK_STORAGE_DEFAULT_ORDER="bricktracker_metadata_storages"."name" ASC
# Examples:
# BK_STORAGE_DEFAULT_ORDER="bricktracker_metadata_storages"."name" DESC
# BK_STORAGE_DEFAULT_ORDER="bricktracker_metadata_storages"."rowid" DESC
# Optional: URL to the themes.csv.gz on Rebrickable
# Default: https://cdn.rebrickable.com/media/downloads/themes.csv.gz
# BK_THEMES_FILE_URL=
# Optional: Path to the themes file
# Optional: Path to the themes file, relative to '/app/' folder
# You can name it whatever you want, but content has to be a CSV
# Default: ./themes.csv
# BK_THEMES_PATH=/var/lib/bricktracker/themes.csv
# Default: data/themes.csv
# BK_THEMES_PATH=data/themes.csv
# Optional: Timezone to use to display datetimes
# Check your system for available timezone/TZ values
@@ -305,11 +470,44 @@
# Default: false
# BK_USE_REMOTE_IMAGES=true
# Optional: Change the default order of sets. By default ordered by insertion order.
# Optional: Change the default order of wishlist sets. By default ordered by insertion order.
# Useful column names for this option are:
# - "bricktracker_wishes"."set": set number as a string
# - "bricktracker_wishes"."set": set number as a string (e.g., "10255-1")
# - "bricktracker_wishes"."name": set name
# - "bricktracker_wishes"."year": set release year
# - "bricktracker_wishes"."number_of_parts": set number of parts
# - "bricktracker_wishes"."theme_id": theme ID
# - "bricktracker_wishes"."rowid": insertion order (special column)
# Default: "bricktracker_wishes"."rowid" DESC
# BK_WISHES_DEFAULT_ORDER="bricktracker_wishes"."set" DESC
# Examples:
# BK_WISHES_DEFAULT_ORDER="bricktracker_wishes"."year" DESC, "bricktracker_wishes"."name" ASC
# BK_WISHES_DEFAULT_ORDER="bricktracker_wishes"."number_of_parts" DESC
# BK_WISHES_DEFAULT_ORDER="bricktracker_wishes"."set" ASC
# Optional: Show collection growth charts on the statistics page
# Default: true
# BK_STATISTICS_SHOW_CHARTS=false
# Optional: Default state of statistics page sections (expanded or collapsed)
# When true, all sections start expanded. When false, all sections start collapsed.
# Default: true
# BK_STATISTICS_DEFAULT_EXPANDED=false
# Optional: Enable dark mode by default
# When true, the application starts in dark mode.
# Default: false
# BK_DARK_MODE=true
# Optional: Customize badge order for Grid view (set cards on /sets/)
# Comma-separated list of badge keys in the order they should appear
# Available badges: theme, tag, year, parts, instance_count, total_minifigures,
# total_missing, total_damaged, owner, storage, purchase_date, purchase_location,
# purchase_price, instructions, rebrickable, bricklink
# Default: theme,year,parts,total_minifigures,owner
# BK_BADGE_ORDER_GRID=theme,year,parts,total_minifigures,owner,storage
# Optional: Customize badge order for Detail view (individual set details page)
# Comma-separated list of badge keys in the order they should appear
# Use the same badge keys as BK_BADGE_ORDER_GRID
# Default: theme,tag,year,parts,instance_count,total_minifigures,total_missing,total_damaged,owner,storage,purchase_date,purchase_location,purchase_price,instructions,rebrickable,bricklink
# BK_BADGE_ORDER_DETAIL=theme,tag,year,parts,owner,storage,purchase_date,rebrickable,bricklink
@@ -17,12 +17,25 @@ static/sets/
# IDE
.vscode/
*.code-workspace
# Temporary
*.csv
/local/
run_local.sh
settings.local.json
/offline/
# Apple idiocy
.DS_Store
# Documentation
docusaurus/
vitepress/
# Local data
offline/
data/
# Hidden folders
.?*
@@ -1,8 +1,480 @@
# Changelog
## Unreleased
## 1.4.1
### Enhancements
- **"Reset to Defaults" confirmation now uses a Bootstrap modal instead of a browser dialog**: Replaced the native browser `confirm()` popup with a consistent Bootstrap modal matching the style of BrickTracker
### Bug Fixes
- **Fixed prices on the Statistics page being rounded to whole numbers** (Issue #146): All price values now display with two decimal places (`%.2f`) instead of being rounded to whole numbers (`%.0f`)
- **Fixed "Reset to Defaults" blanking all settings instead of restoring them** (Issue #149a, branch `bugfix/issue-149a`): "Reset to Defaults" was clearing all fields to empty/false instead of populating them with their actual default values
- `resetToDefaults()` now reads from `window.DEFAULT_CONFIG` and restores each field to its proper default, matching the same logic used on initial page load
- **Fixed `BK_INSTRUCTIONS_ALLOWED_EXTENSIONS` being treated as a string instead of a list** (Issue #149b, branch `bugfix/issue-149b`): When this setting was saved via the admin panel, it was stored and cast as a plain string rather than a list, causing it to be iterated character by character (e.g. `['.', 'p', 'd', 'f']` instead of `['.pdf']`)
- Added `allowed_extensions` to the list-type keyword detection in `_cast_value()`, matching the existing pattern used for `badge_order` settings
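A minimal sketch of the keyword-based list casting described here; the real `_cast_value()` implementation is not shown in this changelog, so the `LIST_KEYWORDS` tuple and `cast_value` function below are assumptions:

```python
# Keys containing one of these keywords are treated as list-typed settings
# (assumed names; the changelog mentions badge_order and allowed_extensions)
LIST_KEYWORDS = ("badge_order", "allowed_extensions")

def cast_value(key: str, value: str):
    # Split list-typed values on commas instead of storing one string,
    # which would otherwise be iterated character by character
    if any(keyword in key.lower() for keyword in LIST_KEYWORDS):
        return [item.strip() for item in value.split(",") if item.strip()]
    return value

print(cast_value("BK_INSTRUCTIONS_ALLOWED_EXTENSIONS", ".pdf"))
# ['.pdf']  -- not ['.', 'p', 'd', 'f']
```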
- **Fixed crash when importing sets containing minifigures or parts with no image on Rebrickable** (Issue #149c, branch `bugfix/issue-149c`): Adding or refreshing a set would fail entirely if any minifigure or part had no image URL, with error `Invalid URL '': No scheme supplied`
- Rebrickable returns an empty string (not `None`) for missing images; normalize empty strings to `None` at the point of ingestion in `rebrickable_minifigure.py` and `individual_minifigure.py`, matching the existing pattern in `rebrickable_set.py`
- Updated `rebrickable_image.py` to treat empty strings the same as `None` throughout, falling back to the configured nil placeholder image
- Note: the originally reported sets could no longer reproduce the crash (images may have since been added on Rebrickable), so this fix is based on the assumed root cause rather than a confirmed reproduction
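The normalization described above can be sketched in one line; the helper name is hypothetical:

```python
def normalize_image_url(url):
    # Rebrickable returns '' (not None) when an image is missing;
    # collapse both to None so the nil placeholder fallback applies
    return url or None

print(normalize_image_url(""))    # None
print(normalize_image_url(None))  # None
```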
- **Fixed previously added set being re-added when adding an individual minifigure** (Issue #150, branch `bugfix/issue-150`): After adding a set, entering a `fig-` ID and confirming would add the previous set again instead of the minifigure, if the user did not reload the page in between
- `add.js` was creating a second `BrickMinifigureSocket` with its own listeners on the same button and input as `BrickSetSocket`, causing duplicate socket events and cross-socket state confusion
- **Fixed purchase date, price, and notes not being saved when adding an individual minifigure** (Issue #151, branch `bugfix/issue-151`): Filling in purchase date, price, or notes before clicking Add had no effect, only purchase location was saved
- `BrickMinifigureSocket` was missing references to `#add-purchase-date`, `#add-purchase-price`, and `#add-description`, so those fields were never read or included in the socket emit
- The backend already supported all three fields. This was just a frontend error
- **Fixed purchase date and price not being converted when adding an individual minifigure** (Issue #151 follow-up): Purchase date was stored as a raw `YYYY/MM/DD` string and price as a raw string instead of a Unix epoch float and float respectively, causing them to be silently dropped from statistics aggregations
- `IndividualMinifigure.download()` now mirrors the conversion logic already present in `BrickSet.download()`: date parsed via `datetime.strptime` to timestamp, price cast to `float`
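A hedged sketch of the conversion described above; the function name is illustrative and only the `YYYY/MM/DD` input format, the `datetime.strptime` parse, and the `float` cast come from the changelog entry:

```python
from datetime import datetime, timezone

def convert_purchase_fields(date_str, price_str):
    # 'YYYY/MM/DD' string -> Unix epoch float; price string -> float.
    # Empty or missing values pass through as None.
    timestamp = None
    if date_str:
        timestamp = datetime.strptime(date_str, "%Y/%m/%d").replace(
            tzinfo=timezone.utc,
        ).timestamp()
    price = float(price_str) if price_str not in (None, "") else None
    return timestamp, price

print(convert_purchase_fields("2026/04/18", "19.99"))
```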
- **Fixed a price of 0 being treated as no price** (Issue #153): Setting a purchase price of `0` on sets, individual minifigures, or parts was indistinguishable from having no price set at all
- Replaced truthiness checks (`if price`, `price or ''`) with explicit `None` checks throughout badge display, management input fields, and inline price update endpoints
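The difference between a truthiness check and an explicit `None` check is easy to demonstrate; `price_badge` is an illustrative stand-in for the badge display code, not the actual template helper:

```python
def price_badge(price):
    # Explicit None check: a price of 0 is valid and must still render.
    # A truthiness check ("if price") would have hidden it.
    if price is None:
        return ""
    return f"${price:.2f}"

print(price_badge(0))     # $0.00
print(price_badge(None))  # (empty string)
```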
- **Fixed deleting a wish with an owner assigned** (Issue #152, branch `bugfix/issue-152`): Resolved foreign key constraint error when removing a set from the wishlist that had an owner assigned
- Wish owners are now deleted before the wish itself, respecting the FK constraint
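A self-contained sqlite3 sketch of why the delete order matters under foreign key enforcement; the table and column names here are simplified stand-ins, not BrickTracker's actual schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")
db.execute("CREATE TABLE wishes (set_num TEXT PRIMARY KEY)")
db.execute(
    "CREATE TABLE wish_owners ("
    " set_num TEXT REFERENCES wishes(set_num) ON DELETE RESTRICT,"
    " owner TEXT)"
)
db.execute("INSERT INTO wishes VALUES ('10255-1')")
db.execute("INSERT INTO wish_owners VALUES ('10255-1', 'me')")

# Children first, then the parent: no FOREIGN KEY constraint error
db.execute("DELETE FROM wish_owners WHERE set_num = '10255-1'")
db.execute("DELETE FROM wishes WHERE set_num = '10255-1'")
db.commit()
print(db.execute("SELECT COUNT(*) FROM wishes").fetchone()[0])  # 0
```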
## 1.4
### Bug Fixes
- **Fixed client-side table sorting corruption** (Issue #136): Resolved data corruption when using sort buttons with DataTables header sorting in client-side pagination mode
- Sort buttons now trigger actual table header clicks instead of using separate `columns.sort()`
- Header clicks sync button states to match current sort
- Prevents misaligned images, colors, and links when mixing sorting methods
- **Fixed storage deletion error handling**: Added proper validation and user-friendly error messages when attempting to delete storage locations that are still in use
- Shows detailed count of items using the storage (sets, individual minifigures, individual parts, part lots)
- Provides clickable link to storage details page for easy navigation
- Prevents accidental deletion of storage locations with referenced items
- **Fixed bulk parts redirect**: Corrected endpoint reference from `individual_part.list_all` to `individual_part.list` after route function rename
- **Fixed purchase location templates**: Created missing template files for purchase location pages
- **Fixed set refresh functionality**: Resolved issues with refreshing sets from Rebrickable
- Fixed foreign key constraint errors during refresh by reusing existing set IDs instead of generating new UUIDs
- Implemented UPDATE-then-INSERT pattern to properly update existing parts while preserving user tracking data
- Part quantities now correctly sync with Rebrickable during refresh
- User tracking data (`checked`, `missing`, `damaged`) is preserved across refreshes
- New parts from Rebrickable are added to local inventory during refresh
- Orphaned parts (parts no longer in Rebrickable's inventory) are now properly removed during refresh
- Refresh now works correctly for both set parts and minifigure parts
- Uses temporary tracking table to identify which parts are still valid before cleanup
- **Fixed Socket.IO connections behind reverse proxies**: Resolved WebSocket disconnection issues when using Traefik, Nginx, or other reverse proxies
- Root cause: Setting `BK_DOMAIN_NAME` enables strict CORS checking that fails with reverse proxies
- Solution: Leave `BK_DOMAIN_NAME` empty for reverse proxy deployments (allows all origins by default)
- Added debug logging for Socket.IO connections to help troubleshoot proxy issues
- **Fixed bulk import hanging on empty set numbers**: Resolved issue where trailing commas in bulk import input would cause infinite loops
- Empty strings from trailing commas (e.g., `"10312, 21348, "`) are now filtered out before processing
- Prevents "Set number cannot be empty" errors from blocking the bulk import queue
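The filtering fix can be sketched as a one-liner; `parse_bulk_input` is an illustrative name for whatever function splits the bulk input:

```python
def parse_bulk_input(raw: str) -> list[str]:
    # Strip whitespace and drop empty entries so trailing commas
    # like "10312, 21348, " cannot enqueue an empty set number
    return [s.strip() for s in raw.split(",") if s.strip()]

print(parse_bulk_input("10312, 21348, "))  # ['10312', '21348']
```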
- **Added notes display toggles**: Added configuration options to show/hide notes on grid and detail views
- New `BK_SHOW_NOTES_GRID` setting (default: `false`) - controls whether notes appear on grid view cards
- New `BK_SHOW_NOTES_DETAIL` setting (default: `true`) - controls whether notes appear on set detail pages
- Notes display as an info alert box below badges when enabled
- Both settings can be toggled in Admin -> Live Settings panel without container restart
- Fixed consolidated SQL query to include description field for proper notes display in server-side pagination
- **Fixed permission denied when running as non-root user** (Issue #138): Resolved container startup failure when using `user:` directive in docker-compose
- Added `chmod -R a+rX /app` to Dockerfile to ensure all files are readable regardless of build environment
- Added commented `user:` example in `compose.yaml` to document non-root support
### Breaking Changes
- **Parts default order column names changed**: The `BK_PARTS_DEFAULT_ORDER` environment variable now uses `"combined"` instead of `"bricktracker_parts"` for column references
- If you have a custom `BK_PARTS_DEFAULT_ORDER` setting, update column references:
- `"bricktracker_parts"."spare"` → `"combined"."spare"`
- `"bricktracker_parts"."part"` → `"combined"."part"`
- `"bricktracker_parts"."quantity"` → `"combined"."quantity"`
- Or remove the custom setting to use the new defaults
- See `.env.sample` for the full list of available column names
### New Features
- **Sortable Checked column** (Issue #137): The "Checked" column in set inventory tables can now be sorted
- Click the "Checked" header to sort by checked/unchecked status
- Works in both parts table and part lots table
- **Quick-add individual parts toggle**: New `BK_HIDE_QUICK_ADD_INDIVIDUAL_PARTS` setting to hide the quick-add menu in set parts tables
- Hides the "Add to individual parts" option in the row menu dropdown
- Useful when you want individual parts tracking enabled but don't need quick-add from set inventory
- **Individual Minifigures Tracking**
- Track loose/individual minifigures outside of sets
- Part-level tracking for individual minifigures with problem states (missing/damaged/checked)
- Complete metadata support (owners, tags, statuses, storage, purchase info)
- Purchase tracking with date, location, and price
- Quick navigation from set minifigures to individual instances
- Filter and search capabilities
- Feature flags:
- `BK_HIDE_INDIVIDUAL_MINIFIGURES`: Hides individual minifigures UI elements (navbar menu item, links from minifigure detail pages)
- `BK_DISABLE_INDIVIDUAL_MINIFIGURES`: Enables read-only mode - all individual minifigure pages remain accessible but with all editing fields disabled (quantity, parts table, metadata inputs), delete buttons hidden, and write operations blocked.
- **Individual Parts Tracking**
- Track loose parts outside of sets and minifigures
- Quick-add functionality from set parts tables
- Complete metadata support (owners, tags, storage, purchase info)
- Problem tracking (missing/damaged/checked states)
- Purchase tracking with date, location, and price
- Bulk part addition interface
- Feature flags:
- `BK_HIDE_INDIVIDUAL_PARTS`: Hides individual parts UI elements (navbar menu item, "Add Parts" button, links from part detail pages)
- `BK_DISABLE_INDIVIDUAL_PARTS`: Enables read-only mode - all individual parts and lot pages remain accessible but with all editing fields disabled (quantity, missing/damaged, parts table, metadata inputs), delete buttons hidden, "Add Parts" menu item removed, and write operations blocked. The /add/ page also hides the "Adding individual parts?" section.
- **Part Lots System**
- Organize individual parts into logical lots/collections
- Lot-level metadata (name, description, created date)
- Shared metadata across lot (storage, purchase info)
- View all parts in a lot with filtering
- **Purchase Location Management**
- Centralized purchase location tracking for sets, individual minifigures, parts, and lots
- New purchase location management page (`/purchase-locations/`)
- Track which items were purchased from each location
- Integrated with existing storage and owner metadata systems
- **Rebrickable Color Database**
- Caches color information from Rebrickable API
- Provides BrickLink color ID mapping
- Reduces repeated API calls for color data
- Supports export functionality with correct color IDs
- **Export Functionality**
- Added export system in admin panel for sets, parts, and problem parts
- Export accordion in `/admin/` with three main categories:
- **Export Sets**: Rebrickable CSV format for collection tracking
- **Export All Parts**: Three formats available:
- Rebrickable CSV (Part, Color, Quantity)
- LEGO Pick-a-Brick CSV (elementId, quantity)
- BrickLink XML (wanted list format)
- **Export Missing/Damaged Parts**: Same three formats as parts exports
- All exports aggregate quantities automatically (parts by part+color, LEGO by element ID)
- BrickLink exports use proper BrickLink part numbers and color IDs when available
- Format information displayed in UI for user guidance
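The aggregation step can be sketched with a `Counter` keyed on `(part, color)`; the row field names below are assumptions for illustration:

```python
from collections import Counter

# Two lots of the same part/color should export as one summed row
rows = [
    {"part": "3001", "color": 4, "quantity": 2},
    {"part": "3001", "color": 4, "quantity": 3},
    {"part": "3003", "color": 0, "quantity": 1},
]

totals = Counter()
for row in rows:
    totals[(row["part"], row["color"])] += row["quantity"]

print(sorted(totals.items()))
# [(('3001', 4), 5), (('3003', 0), 1)]
```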
- **Badge Order Customization**
- Added customizable badge ordering for set cards and detail pages
- Separate configurations for Grid view (`/sets/` cards) and Detail view (individual set pages)
- Configure via environment variables in `.env` file:
- `BK_BADGE_ORDER_GRID`: Comma-separated badge keys for grid view (default: theme,year,parts,total_minifigures,owner)
- `BK_BADGE_ORDER_DETAIL`: Comma-separated badge keys for detail view (default: all 16 badges)
- Can also be configured via Live Settings page in admin panel under "Default Ordering & Formatting"
- Changes apply immediately without restart when edited via admin panel
- 16 available badge types: theme, tag, year, parts, instance_count, total_minifigures, total_missing, total_damaged, owner, storage, purchase_date, purchase_location, purchase_price, instructions, rebrickable, bricklink
- **Front Page Parts Display**
- Added latest/random parts section to the front page alongside sets and minifigures
- Shows 6 parts with quantity badges and other relevant information
- Respects `BK_RANDOM` configuration (random selection when enabled, latest when disabled)
- Respects `BK_HIDE_SPARE_PARTS` configuration
- Respects `BK_HIDE_ALL_PARTS` configuration for "All parts" button visibility
- **NOT Filter Toggle Buttons**
- Added toggle buttons next to all filter dropdowns to switch between "equals" and "not equals" modes
- Visual feedback: Button displays red with "not equals" icon (≠) when in NOT mode
- Works with all filter types: Status, Theme, Owner, Storage, Purchase Location, Tag, and Year
- Supports both client-side and server-side pagination modes
- Filter chains persist NOT states across page reloads via URL parameters (e.g., `?theme=-frozen&status=-has-missing`)
- Clear filters button resets all toggle states to equals mode
- Enables complex filter combinations like "Show me 2025 sets that are NOT Frozen theme AND have missing pieces"
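A sketch of decoding the leading-`-` convention shown in the URL example above; the helper is hypothetical and the real parsing code may differ:

```python
def parse_filter(value: str):
    # A leading '-' (e.g. ?theme=-frozen) flips the filter into
    # "not equals" mode; returns (term, negated)
    if value.startswith("-"):
        return value[1:], True
    return value, False

print(parse_filter("-frozen"))  # ('frozen', True)
print(parse_filter("2025"))     # ('2025', False)
```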
- **Notes/Comments Field**
- Added general notes field to set details for storing custom notes and comments
- Accessible via Management -> Notes accordion section on set detail pages
- Auto-save functionality with visual feedback (save icon updates on change)
- Notes display prominently below badges on set cards when populated
- Supports multi-line text input with configurable row height
- Clear button to quickly remove notes
- **Bulk Set Refresh**
- Added batch refresh functionality for updating multiple sets at once
- New "Bulk Refresh" button appears on Admin -> Sets needing refresh page
- Pre-populates the text area with a comma-separated list of all sets needing refresh
- Follows same pattern as bulk add with progress tracking and set card preview
- Shows real-time progress with current set being processed
- Failed sets remain in input field for easy retry
### Database Improvements
- **Standardized ON DELETE Behavior**: Unified foreign key deletion handling across all metadata tables
- All metadata foreign keys now use RESTRICT (prevent deletion if referenced)
- Prevents accidental deletion of storage locations or purchase locations that are in use
- **Performance Indexes Added**: New composite indexes for common query patterns
- `idx_individual_parts_lot_id_part_color` - Optimizes listing parts within a lot
- `idx_individual_parts_missing_damaged` - Optimizes finding parts with problems
- `idx_individual_minifigure_parts_checked` - Optimizes finding unchecked parts in minifigures
- **Consolidated Metadata Tables**: Migration 0027 removes foreign key constraints from metadata junction tables
- `bricktracker_set_owners`, `bricktracker_set_tags`, `bricktracker_set_statuses` now accept any entity type
- Enables reusing metadata tables for sets, individual minifigures, individual parts, and lots
- **Fixed Schema Drop Script**: Resolved foreign key constraint errors during database reset
- Added proper table drop ordering (children before parents)
- Implemented `PRAGMA foreign_keys OFF/ON` wrapping
- Includes all new tables from migrations 0021-0027
### Configuration & Environment Variables
- **New Configuration Options**:
- `BK_HIDE_INDIVIDUAL_MINIFIGURES` - Hide individual minifigures UI elements in navigation
- `BK_DISABLE_INDIVIDUAL_MINIFIGURES` - Block write operations for individual minifigures (view-only mode)
- `BK_HIDE_INDIVIDUAL_PARTS` - Hide individual parts UI elements in navigation
- `BK_DISABLE_INDIVIDUAL_PARTS` - Block write operations for individual parts (view-only mode)
- `BK_BADGE_ORDER_GRID` - Customize badge order on set cards in grid view (comma-separated list)
- `BK_BADGE_ORDER_DETAIL` - Customize badge order on set detail pages (comma-separated list)
- `BK_SHOW_NOTES_GRID` - Show notes on set cards in grid view (default: false)
- `BK_SHOW_NOTES_DETAIL` - Show notes on set detail pages (default: true)
- All new settings support live configuration updates via Admin panel
### Technical Improvements
- **Route Protection Decorators**: New decorator pattern for feature flag enforcement
- `@require_individual_minifigures_write` - Blocks writes when feature is disabled
- `@require_individual_parts_write` - Blocks writes when feature is disabled
- Allows viewing existing data while preventing new additions
- **SQL Query Organization**: New query directory structure for individual features
- `bricktracker/sql/individual_minifigure/` - All individual minifigure queries
- `bricktracker/sql/individual_part/` - All individual part queries
- `bricktracker/sql/individual_part_lot/` - All part lot queries
- `bricktracker/sql/rebrickable_colors/` - Color reference queries
- `bricktracker/sql/rebrickable_parts/` - Part reference queries
- **Database Migrations**: 7 new migrations (0021-0027)
- 0021: Individual minifigures and parts tables
- 0022: Individual part lots system with proper foreign keys
- 0023: Performance indexes for individual features
- 0024: Rebrickable colors cache table
- 0025: Additional composite indexes for query optimization
- 0026: Standardized ON DELETE behavior across metadata tables
- 0027: Consolidated metadata tables (remove FK constraints)
## 1.3.1
### New Functionality
- **Database Integrity Check and Cleanup**
- Added database integrity scanner to detect orphaned records and foreign key violations
- New "Check Database Integrity" button in admin panel scans for issues
- Detects orphaned sets, parts, and parts with missing set references
- Warning prompts users to backup database before cleanup
- Cleanup removes all orphaned records in one operation
- Detailed scan results show affected records with counts and descriptions
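An orphan scan of this kind boils down to anti-join queries; here is a self-contained sketch with an illustrative two-table schema (the real table and column names may differ):

```python
import sqlite3

# Illustrative schema: each part row references an owning set by "id"
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE bricktracker_sets ("id" TEXT PRIMARY KEY)')
conn.execute('CREATE TABLE bricktracker_parts ("id" TEXT, "part" TEXT)')
conn.execute('INSERT INTO bricktracker_sets VALUES (?)', ('a',))
conn.executemany('INSERT INTO bricktracker_parts VALUES (?, ?)',
                 [('a', '3001'), ('gone', '3002')])

# Detect: anti-join finds child rows whose parent set is missing
orphans = conn.execute('''
    SELECT "id", "part" FROM "bricktracker_parts"
    WHERE "id" NOT IN (SELECT "id" FROM "bricktracker_sets")
''').fetchall()

# Cleanup: the same predicate drives the one-shot delete
conn.execute('''
    DELETE FROM "bricktracker_parts"
    WHERE "id" NOT IN (SELECT "id" FROM "bricktracker_sets")
''')
```
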
- **Database Optimization**
- Added "Optimize Database" button to re-create performance indexes
- Safe to run after database imports or restores
- Re-creates all indexes from migration 0019 using `CREATE INDEX IF NOT EXISTS`
- Runs `ANALYZE` to rebuild query statistics
- Runs `PRAGMA optimize` for additional query plan optimization
- Helpful after importing backup databases that may lack performance optimizations
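The optimization routine amounts to three idempotent steps. The index statement below re-creates one of the migration 0019 indexes (the column name is assumed, and the function signature is hypothetical):

```python
import sqlite3

def optimize_database(path: str) -> None:
    connection = sqlite3.connect(path)
    try:
        # Safe to re-run: IF NOT EXISTS makes index creation idempotent
        connection.execute(
            'CREATE INDEX IF NOT EXISTS "idx_bricktracker_sets_set_storage" '
            'ON "bricktracker_sets" ("storage")'
        )
        connection.execute('ANALYZE')          # rebuild query statistics
        connection.execute('PRAGMA optimize')  # extra query-plan tuning
        connection.commit()
    finally:
        connection.close()
```

Running it twice is harmless, which is what makes it safe to offer as a button after imports or restores.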
### Bug Fixes
- **Fixed foreign key constraint errors during set imports**: Resolved `FOREIGN KEY constraint failed` errors when importing sets with parts and minifigures
- Fixed insertion order in `bricktracker/part.py`: Parent records (`rebrickable_parts`) now inserted before child records (`bricktracker_parts`)
- Fixed insertion order in `bricktracker/minifigure.py`: Parent records (`rebrickable_minifigures`) now inserted before child records (`bricktracker_minifigures`)
- Ensures foreign key references are valid when SQLite checks constraints
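The ordering rule is easy to demonstrate in isolation (a simplified two-column schema, not the project's actual tables):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('PRAGMA foreign_keys = ON')
conn.execute('CREATE TABLE rebrickable_parts ("part" TEXT PRIMARY KEY)')
conn.execute(
    'CREATE TABLE bricktracker_parts ('
    '"rowid_" INTEGER PRIMARY KEY, '
    '"part" TEXT REFERENCES rebrickable_parts ("part"))'
)

# Parent first, then child: succeeds
conn.execute('INSERT INTO rebrickable_parts VALUES (?)', ('3001',))
conn.execute('INSERT INTO bricktracker_parts ("part") VALUES (?)', ('3001',))

# Child row pointing at an unknown parent: SQLite rejects it
try:
    conn.execute('INSERT INTO bricktracker_parts ("part") VALUES (?)', ('9999',))
except sqlite3.IntegrityError:
    pass  # "FOREIGN KEY constraint failed", the error the fix eliminates
```
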
- **Fixed set metadata updates**: Owner, status, and tag checkboxes now properly persist changes on set details page
- Fixed `update_set_state()` method to commit database transactions (was using deferred execution without commit)
- All metadata updates (owner, status, tags, storage, purchase info) now work consistently
- **Fixed nil image downloads**: Placeholder images for parts and minifigures without images now download correctly
- Removed early returns that prevented nil image downloads
- Nil images now properly saved to configured folders (e.g., `/app/data/parts/nil.jpg`)
- **Fixed error logging for missing files**: File not found errors now show actual configured folder paths instead of just URL paths
- Added detailed logging showing both file path and configured folder for easier debugging
- **Fixed minifigure filters in client-side pagination mode**: Owner and other filters now work correctly when server-side pagination is disabled
- Aligned filter behavior with parts page (applies filters server-side, then loads filtered data for client-side search)
## 1.3
### Breaking Changes
#### Data Folder Consolidation
> **Warning**
> **BREAKING CHANGE**: Version 1.3 consolidates all user data into a single `data/` folder for easier backup and volume mapping.
- **Path handling**: All relative paths are now resolved relative to the application root (`/app` in Docker)
- Example: `data/app.db` resolves to `/app/data/app.db`
- **New default paths** (automatically used for new installations):
- Database: `data/app.db` (was: `app.db` in root)
- Configuration: `data/.env` (was: `.env` in root) - *optional, backward compatible*
- CSV files: `data/*.csv` (was: `*.csv` in root)
- Images/PDFs: `data/{sets,parts,minifigures,instructions}/` (was: `static/*`)
- **Configuration file (.env) location**:
- New recommended location: `data/.env` (included in data volume, settings persist)
- Backward compatible: `.env` in root still works (requires volume mount for admin panel persistence)
- Priority: `data/.env` > `.env` (automatic detection, no migration required)
- **Migration options**:
1. **Migrate to new structure** (recommended - single volume for all data including .env)
2. **Keep current setup** (backward compatible - old paths continue to work)
See [Migration Guide](docs/migration_guide.md) for detailed instructions
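The path-handling rule above can be expressed in a few lines. Treating `/app` as the root is the Docker assumption, and `PurePosixPath` keeps the sketch platform-independent:

```python
from pathlib import PurePosixPath

APP_ROOT = PurePosixPath('/app')  # application root inside the Docker image

def resolve_data_path(configured: str) -> PurePosixPath:
    """Anchor relative configured paths at the application root;
    honour absolute paths as-is."""
    path = PurePosixPath(configured)
    return path if path.is_absolute() else APP_ROOT / path
```

So a configured `data/app.db` becomes `/app/data/app.db`, matching the example above, while users who mount data elsewhere can still pass an absolute path.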
#### Default Minifigures Folder Change
> **Warning**
> **BREAKING CHANGE**: Default minifigures folder path changed from `minifigs` to `minifigures`
- **Impact**: Users who relied on the default `BK_MINIFIGURES_FOLDER` value (without explicitly setting it) will need to either:
1. Set `BK_MINIFIGURES_FOLDER=minifigs` in their environment to maintain existing behavior, or
2. Rename their existing `minifigs` folder to `minifigures`
- **No impact**: Users who already have `BK_MINIFIGURES_FOLDER` explicitly configured
- Improved consistency across documentation and Docker configurations
### New Features
- **Live Settings changes**
- Added live environment variable configuration management system
- Configuration Management interface in admin panel with live preview and badge system
- **Live settings**: Can be changed without application restart (menu visibility, table display, pagination, features)
- **Static settings**: Require restart but can be edited and saved to .env file (authentication, server, database, API keys)
- Advanced badge system showing value status: True/False for booleans, Set/Default/Unset for other values, Changed indicator
- Live API endpoints: `/admin/api/config/update` for immediate changes, `/admin/api/config/update-static` for .env updates
- Form pre-population with current values and automatic page reload after successful live updates
- Fixed environment variable lock detection in admin configuration panel
- Resolved bug where all variables appeared "locked" after saving live settings
- Lock detection now correctly identifies only Docker environment variables set before .env loading
- Variables set via Docker's `environment:` directive remain properly locked
- Variables from data/.env or root .env are correctly shown as editable
- Added configuration persistence warning in admin panel
- Warning banner shows when using .env in root (non-persistent)
- Success banner shows when using data/.env (persistent)
- Provides migration instructions directly in the UI
- **Spare Parts**
- Added spare parts control options
- `BK_SKIP_SPARE_PARTS`: Skip importing spare parts when downloading sets from Rebrickable (parts not saved to database)
- `BK_HIDE_SPARE_PARTS`: Hide spare parts from all parts lists (parts must still be in database)
- Both options are live-changeable in admin configuration panel
- Options can be used independently or together for flexible spare parts management
- Affects all parts displays: /parts page, set details accordion, minifigure parts, and problem parts
- **Pagination**
- Added individual pagination control system per entity type
- `BK_SETS_SERVER_SIDE_PAGINATION`: Enable/disable pagination for sets
- `BK_PARTS_SERVER_SIDE_PAGINATION`: Enable/disable pagination for parts
- `BK_MINIFIGURES_SERVER_SIDE_PAGINATION`: Enable/disable pagination for minifigures
- Device-specific pagination sizes (desktop/mobile) for each entity type
- Supports search, filtering, and sorting in both server-side and client-side modes
- **Peeron Instructions**
- Added Peeron instructions integration
- Full image caching system with automatic thumbnail generation
- Optimized HTTP calls by downloading full images once and generating thumbnails locally
- Automatic cache cleanup after PDF generation to save disk space
- **Parts checkmark**
- Added parts checking/inventory system
- New "Checked" column in parts tables for tracking inventory progress
- Checkboxes to mark parts as verified during set walkthrough
- `BK_HIDE_TABLE_CHECKED_PARTS`: Environment variable to hide checked column
- **Set Consolidation**
- Added set consolidation/grouping functionality
- Automatic grouping of duplicate sets on main sets page
- Shows instance count with stack icon badge (e.g., "3 copies")
- Expandable drawer interface to view all set copies individually
- Full set cards for each instance with all badges, statuses, and functionality
- `BK_SETS_CONSOLIDATION`: Environment variable to enable/disable consolidation (default: false)
- Backwards compatible - when disabled, behaves exactly like original individual view
- Improved theme filtering: handles duplicate theme names correctly
- Fixed set number sorting: proper numeric sorting in both ascending and descending order
- Mixed status indicators for consolidated sets: three-state checkboxes (unchecked/partial/checked) with count badges
- Template logic handles three states: none (0/2), all (2/2), partial (1/2) with visual indicators
- Purple overlay styling for partial states, disabled checkboxes for read-only consolidated status display
- Individual sets maintain full interactive checkbox functionality
- **Statistics**
- Added comprehensive statistics system (#91)
- New Statistics page with collection analytics
- Financial overview: total cost, average price, price range, investment tracking
- Collection metrics: total sets, unique sets, parts count, minifigures count
- Theme distribution statistics with clickable drill-down to filtered sets
- Storage location statistics showing sets per location with value calculations
- Purchase location analytics with spending patterns and date ranges
- Problem tracking: missing and damaged parts statistics
- Clickable numbers throughout statistics that filter to relevant sets
- `BK_HIDE_STATISTICS`: Environment variable to hide statistics menu item
- Year-based analytics: Sets by release year and purchases by year
- Sets by Release Year: Shows collection distribution across LEGO release years
- Purchases by Year: Tracks spending patterns and acquisition timeline
- Year summary with peak collection/spending years and timeline insights
- Enhanced statistics interface and functionality
- Collapsible sections: All statistics sections have clickable headers to expand/collapse
- Collection growth charts: Line charts showing sets, parts, and minifigures over time
- Configuration options: `BK_STATISTICS_SHOW_CHARTS` and `BK_STATISTICS_DEFAULT_EXPANDED` environment variables
- **Admin Page Section Expansion**
- Added configurable admin page section expansion
- `BK_ADMIN_DEFAULT_EXPANDED_SECTIONS`: Environment variable to specify which sections expand by default
- Accepts comma-separated list of section names (e.g., "database,theme,instructions")
- Valid sections: authentication, instructions, image, theme, retired, metadata, owner, purchase_location, status, storage, tag, database
- URL parameters take priority over configuration (e.g., `?open_database=1`)
- Database section expanded by default to maintain original behavior
- Smart metadata handling: sub-section expansion automatically expands parent metadata section
- **Duplicate Sets filter**
- Added duplicate sets filter functionality
- New filter button on Sets page to show only duplicate/consolidated sets
- `BK_SHOW_SETS_DUPLICATE_FILTER`: Environment variable to show/hide the filter button (default: true)
- Works with both server-side and client-side pagination modes
- Consolidated mode: Shows sets that have multiple instances
- Non-consolidated mode: Shows sets that appear multiple times in collection
- **Bricklink Links**
- Added BrickLink links for sets
- BrickLink badge links now appear on set cards and set details pages alongside Rebrickable links
- `BK_BRICKLINK_LINK_SET_PATTERN`: New environment variable for BrickLink set URL pattern (default: https://www.bricklink.com/v2/catalog/catalogitem.page?S={set_num})
- Controlled by existing `BK_BRICKLINK_LINKS` environment variable
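The pattern is a template with `{set_num}` as its only placeholder, presumably substituted with `str.format` or equivalent; rendering a badge link then reduces to:

```python
# Default value of BK_BRICKLINK_LINK_SET_PATTERN, as documented above
BRICKLINK_SET_PATTERN = (
    'https://www.bricklink.com/v2/catalog/catalogitem.page?S={set_num}'
)

def bricklink_set_url(set_num: str) -> str:
    # set_num is the Rebrickable-style number, e.g. "75192-1"
    return BRICKLINK_SET_PATTERN.format(set_num=set_num)
```
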
- **Dark Mode**
- Added dark mode support
- `BK_DARK_MODE`: Environment variable to enable dark mode theme (default: false)
- Uses Bootstrap 5.3's native dark mode with `data-bs-theme` attribute
- Live-changeable via Admin > Live Settings
- Setting persists across sessions via .env file
- **Alphanumeric Set Number**
- Added alphanumeric set number support
- Database schema change: Set number column changed from INTEGER to TEXT
- Supports LEGO promotional and special edition sets with letters in their numbers
- Examples: "McDR6US-1", "COMCON035-1", "EG00021-1"
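With set numbers stored as TEXT, plain string sorting would put "10030-1" before "6090-1"; a natural-sort key is one common way to keep numeric ordering while accommodating letters (a hypothetical sketch, not necessarily the project's exact implementation):

```python
import re

def set_number_key(number: str) -> list:
    """Natural-sort key: split into text and digit runs so digit runs
    compare as integers and letters compare case-insensitively."""
    return [int(run) if run.isdigit() else run.lower()
            for run in re.split(r'(\d+)', number)]

sets = ['75192-1', 'McDR6US-1', '6090-1', '10030-1']
sets.sort(key=set_number_key)
```
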
### Improvements
- Improved WebSocket/Socket.IO reliability for mobile devices
- Changed connection strategy to polling-first with automatic WebSocket upgrade
- Increased connection timeout to 30 seconds for slow mobile networks
- Added ping/pong keepalive settings (30s timeout, 25s interval)
- Improved server-side connection logging with user agent and transport details
- Fixed dynamic sort icons across all pages
- Sort icons now properly toggle between ascending/descending states
- Improved DataTable integration
- Disabled column header sorting when server-side pagination is enabled
- Prevents conflicting sort mechanisms between DataTable and server-side sorting
- Enhanced color dropdown functionality
- Automatic merging of duplicate color entries with same color_id
- Keeps entries with valid RGB data, removes entries with None/empty RGB
- Preserves selection state during dropdown consolidation
- Consistent search behavior (instant for client-side, Enter key for server-side)
- Mobile-friendly pagination navigation
- Added performance optimization
- SQLite WAL Mode:
- Increased cache size to 10,000 pages (~40MB) for faster query execution
- Set temp_store to memory for accelerated temporary operations
- Enabled foreign key constraints and optimized synchronous mode
- Added ANALYZE for improved query planning and statistics
- Database Indexes (Migration 0019):
- High-impact composite index for problem parts aggregation (`idx_bricktracker_parts_id_missing_damaged`)
- Parts lookup optimization (`idx_bricktracker_parts_part_color_spare`)
- Set storage filtering (`idx_bricktracker_sets_set_storage`)
- Search optimization with case-insensitive indexes (`idx_rebrickable_sets_name_lower`, `idx_rebrickable_parts_name_lower`)
- Year and theme filtering optimization (`idx_rebrickable_sets_year`, `idx_rebrickable_sets_theme_id`)
- Additional indexes for purchase dates, quantities, sorting, and minifigures aggregation
- Statistics Query Optimization:
- Replaced separate subqueries with efficient CTEs (Common Table Expressions)
- Consolidated aggregations for set, part, minifigure, and financial statistics
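The connection-level settings listed under SQLite WAL Mode map onto a handful of pragmas; a sketch using the values stated above (the exact synchronous mode is an assumption, NORMAL being the usual pairing with WAL):

```python
import sqlite3

def tune_connection(connection: sqlite3.Connection) -> None:
    connection.execute('PRAGMA journal_mode = WAL')   # write-ahead logging
    connection.execute('PRAGMA cache_size = 10000')   # pages, roughly 40MB
    connection.execute('PRAGMA temp_store = MEMORY')  # temp tables in RAM
    connection.execute('PRAGMA foreign_keys = ON')    # enforce FK constraints
    connection.execute('PRAGMA synchronous = NORMAL') # assumed pairing with WAL
    connection.execute('ANALYZE')                     # refresh planner statistics
```
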
- Added default image handling for sets without images
- Sets with null/missing images from Rebrickable API now display placeholder image
- Automatic fallback to nil.png from parts folder for set previews
- Copy of nil placeholder saved as set image for consistent display across all routes
- Prevents errors when downloading sets that have no set_img_url in API response
- Fixed instructions download from Rebrickable
- Replaced cloudscraper with standard requests library
- Resolves 403 Forbidden errors when downloading instruction PDFs
- Fixed instructions display and URL generation
- Fixed "Open PDF" button links to use correct data route
- Corrected path resolution for data/instructions folder
- Fixed instruction listing page to scan correct folder location
- Fixed Peeron PDF creation to use correct data folder path
- Fixed foreign key constraint error when adding sets
- Rebrickable set is now inserted before BrickTracker set to satisfy FK constraints
- Resolves "FOREIGN KEY constraint failed" error when adding sets
- Fixed atomic transaction handling for set downloads
- All database operations during set addition now use deferred execution
- Ensures all-or-nothing behavior: if any part fails (set info, parts, minifigs), nothing is committed
- Prevents partial set additions that would leave the database in an inconsistent state
- Metadata updates (owners, tags) now defer until final commit
## 1.2.4
> **Warning**
> To use the new BrickLink color parameter in URLs, update your `.env` file:
@@ -2,10 +2,19 @@ FROM python:3-slim
WORKDIR /app
# Copy requirements first (so pip install can be cached)
COPY requirements.txt .
# Python library requirements
RUN pip install --no-cache-dir -r requirements.txt
# Bricktracker
COPY . .
# Python library requirements
RUN pip --no-cache-dir install -r requirements.txt
# Ensure all files are readable by non-root users (supports user: directive in compose)
RUN chmod -R a+rX /app
# Set executable permissions for entrypoint script
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
@@ -1,9 +1,13 @@
<img src="static/brick.png" height="100" width="100">
# BrickTracker
A web application for organizing and tracking LEGO sets, parts, and minifigures. Uses the Rebrickable API to fetch LEGO data and allows users to track missing pieces and collection status.
<a href="https://www.buymeacoffee.com/frederikb" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" height="41" width="174"></a>
<a href="https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=48JEEKLCGB8DJ"><img src="./docs/images/blue.svg" height="40"></a>
## Features
- Track multiple LEGO sets with their parts and minifigures
@@ -16,19 +20,13 @@ A web application for organizing and tracking LEGO sets, parts, and minifigures.
## Preferred setup: pre-built docker image
Use the provided [compose.yaml](compose.yaml) file.
See [Quick Start](https://bricktracker.baerentsen.space/quick-start) to get up and running right away.
See [Quickstart](docs/quickstart.md) to get up and running right away.
See [Setup](docs/setup.md) for a more detailed setup guide.
## Usage
See [first steps](docs/first-steps.md).
See [Walk Through](https://bricktracker.baerentsen.space/tutorial-first-steps) for a more detailed guide.
## Documentation
Most of the pages should be self-explanatory to use.
However, you can find more specific documentation in the [documentation](docs/DOCS.md).
However, you can find more specific documentation in the [documentation](https://bricktracker.baerentsen.space/whatis).
You can find screenshots of the application in the [overview](docs/overview.md) documentation file.
You can find screenshots of the application in the [overview](https://bricktracker.baerentsen.space/overview) documentation.
@@ -1,6 +1,6 @@
# This need to be first
import eventlet
eventlet.monkey_patch()
import gevent.monkey
gevent.monkey.patch_all()
import logging # noqa: E402
@@ -1,6 +1,8 @@
import logging
import os
import sys
import time
from pathlib import Path
from zoneinfo import ZoneInfo
from flask import current_app, Flask, g
@@ -10,10 +12,12 @@ from bricktracker.configuration_list import BrickConfigurationList
from bricktracker.login import LoginManager
from bricktracker.navbar import Navbar
from bricktracker.sql import close
from bricktracker.template_filters import replace_query_filter
from bricktracker.version import __version__
from bricktracker.views.add import add_page
from bricktracker.views.admin.admin import admin_page
from bricktracker.views.admin.database import admin_database_page
from bricktracker.views.admin.export import admin_export_page
from bricktracker.views.admin.image import admin_image_page
from bricktracker.views.admin.instructions import admin_instructions_page
from bricktracker.views.admin.owner import admin_owner_page
@@ -24,18 +28,76 @@ from bricktracker.views.admin.status import admin_status_page
from bricktracker.views.admin.storage import admin_storage_page
from bricktracker.views.admin.tag import admin_tag_page
from bricktracker.views.admin.theme import admin_theme_page
from bricktracker.views.data import data_page
from bricktracker.views.error import error_404
from bricktracker.views.index import index_page
from bricktracker.views.individual_minifigure import individual_minifigure_page
from bricktracker.views.individual_part import individual_part_page
from bricktracker.views.instructions import instructions_page
from bricktracker.views.login import login_page
from bricktracker.views.minifigure import minifigure_page
from bricktracker.views.part import part_page
from bricktracker.views.purchase_location import purchase_location_page
from bricktracker.views.set import set_page
from bricktracker.views.statistics import statistics_page
from bricktracker.views.storage import storage_page
from bricktracker.views.wish import wish_page
def load_env_file() -> None:
"""Load .env file into os.environ with priority: data/.env > .env (root)
Also stores which BK_ variables were set via Docker environment (before loading .env)
so we can detect locked variables in the admin panel.
"""
import json
data_env = Path('data/.env')
root_env = Path('.env')
# Store which BK_ variables were already in environment BEFORE loading .env
# These are "locked" (set via Docker's environment: directive)
docker_env_vars = {k: v for k, v in os.environ.items() if k.startswith('BK_')}
# Store this in a way the admin panel can access it
# We'll use an environment variable to store the JSON list of locked var names
os.environ['_BK_DOCKER_ENV_VARS'] = json.dumps(list(docker_env_vars.keys()))
env_file = None
if data_env.exists():
env_file = data_env
logging.info(f"Loading environment from: {data_env}")
elif root_env.exists():
env_file = root_env
logging.info(f"Loading environment from: {root_env} (consider migrating to data/.env)")
if env_file:
# Simple .env parser (no external dependencies needed)
with open(env_file, 'r', encoding='utf-8') as f:
for line in f:
line = line.strip()
# Skip comments and empty lines
if not line or line.startswith('#'):
continue
# Parse key=value
if '=' in line:
key, value = line.split('=', 1)
key = key.strip()
value = value.strip()
# Remove quotes if present
if value.startswith('"') and value.endswith('"'):
value = value[1:-1]
elif value.startswith("'") and value.endswith("'"):
value = value[1:-1]
# Only set if not already in environment (environment variables take precedence)
if key not in os.environ:
os.environ[key] = value
def setup_app(app: Flask) -> None:
# Load .env file before configuration (if not already loaded by Docker Compose)
load_env_file()
# Load the configuration
BrickConfigurationList(app)
@@ -46,12 +108,14 @@ def setup_app(app: Flask) -> None:
level=logging.DEBUG,
format='[%(asctime)s] {%(filename)s:%(lineno)d} %(levelname)s - %(message)s', # noqa: E501
)
logging.getLogger().setLevel(logging.DEBUG)
else:
logging.basicConfig(
stream=sys.stdout,
level=logging.INFO,
format='[%(asctime)s] %(levelname)s - %(message)s',
)
logging.getLogger().setLevel(logging.INFO)
# Load the navbar
Navbar(app)
@@ -59,7 +123,8 @@ def setup_app(app: Flask) -> None:
# Setup the login manager
LoginManager(app)
# I don't know :-)
# Configure proxy header handling for reverse proxy deployments (nginx, Apache, etc.)
# This ensures proper client IP detection and HTTPS scheme recognition
app.wsgi_app = ProxyFix(
app.wsgi_app,
x_for=1,
@@ -74,18 +139,24 @@ def setup_app(app: Flask) -> None:
# Register app routes
app.register_blueprint(add_page)
app.register_blueprint(data_page)
app.register_blueprint(index_page)
app.register_blueprint(individual_minifigure_page)
app.register_blueprint(individual_part_page)
app.register_blueprint(instructions_page)
app.register_blueprint(login_page)
app.register_blueprint(minifigure_page)
app.register_blueprint(part_page)
app.register_blueprint(purchase_location_page)
app.register_blueprint(set_page)
app.register_blueprint(statistics_page)
app.register_blueprint(storage_page)
app.register_blueprint(wish_page)
# Register admin routes
app.register_blueprint(admin_page)
app.register_blueprint(admin_database_page)
app.register_blueprint(admin_export_page)
app.register_blueprint(admin_image_page)
app.register_blueprint(admin_instructions_page)
app.register_blueprint(admin_retired_page)
@@ -121,6 +192,9 @@ def setup_app(app: Flask) -> None:
# Version
g.version = __version__
# Register custom Jinja2 filters
app.jinja_env.filters['replace_query'] = replace_query_filter
# Make sure all connections are closed at the end
@app.teardown_request
def teardown_request(_: BaseException | None) -> None:
@@ -10,36 +10,60 @@ from typing import Any, Final
CONFIG: Final[list[dict[str, Any]]] = [
{'n': 'AUTHENTICATION_PASSWORD', 'd': ''},
{'n': 'AUTHENTICATION_KEY', 'd': ''},
# BrickLink minifigure links disabled - Rebrickable doesn't provide BrickLink minifigure IDs
# {'n': 'BRICKLINK_LINK_MINIFIGURE_PATTERN', 'd': 'https://www.bricklink.com/v2/catalog/catalogitem.page?M={figure}'}, # noqa: E501
{'n': 'BRICKLINK_LINK_PART_PATTERN', 'd': 'https://www.bricklink.com/v2/catalog/catalogitem.page?P={part}&C={color}'}, # noqa: E501
{'n': 'BRICKLINK_LINK_SET_PATTERN', 'd': 'https://www.bricklink.com/v2/catalog/catalogitem.page?S={set_num}'}, # noqa: E501
{'n': 'BRICKLINK_LINKS', 'c': bool},
{'n': 'DATABASE_PATH', 'd': './app.db'},
{'n': 'DATABASE_PATH', 'd': 'data/app.db'},
{'n': 'DATABASE_TIMESTAMP_FORMAT', 'd': '%Y-%m-%d-%H-%M-%S'},
{'n': 'DEBUG', 'c': bool},
{'n': 'DEFAULT_TABLE_PER_PAGE', 'd': 25, 'c': int},
{'n': 'DISABLE_INDIVIDUAL_MINIFIGURES', 'c': bool},
{'n': 'DISABLE_INDIVIDUAL_PARTS', 'c': bool},
{'n': 'DISABLE_QUICK_ADD_INDIVIDUAL_PARTS', 'c': bool},
{'n': 'HIDE_QUICK_ADD_INDIVIDUAL_PARTS', 'c': bool},
{'n': 'DOMAIN_NAME', 'e': 'DOMAIN_NAME', 'd': ''},
{'n': 'FILE_DATETIME_FORMAT', 'd': '%d/%m/%Y, %H:%M:%S'},
{'n': 'HOST', 'd': '0.0.0.0'},
{'n': 'INDEPENDENT_ACCORDIONS', 'c': bool},
{'n': 'INSTRUCTIONS_ALLOWED_EXTENSIONS', 'd': ['.pdf'], 'c': list}, # noqa: E501
{'n': 'INSTRUCTIONS_FOLDER', 'd': 'instructions', 's': True},
{'n': 'INSTRUCTIONS_FOLDER', 'd': 'data/instructions'},
{'n': 'HIDE_ADD_SET', 'c': bool},
{'n': 'HIDE_ADD_BULK_SET', 'c': bool},
{'n': 'HIDE_ADMIN', 'c': bool},
{'n': 'ADMIN_DEFAULT_EXPANDED_SECTIONS', 'd': ['database'], 'c': list},
{'n': 'HIDE_ALL_INSTRUCTIONS', 'c': bool},
{'n': 'HIDE_ALL_MINIFIGURES', 'c': bool},
{'n': 'HIDE_INDIVIDUAL_MINIFIGURES', 'c': bool},
{'n': 'HIDE_ALL_PARTS', 'c': bool},
{'n': 'HIDE_INDIVIDUAL_PARTS', 'c': bool},
{'n': 'HIDE_ALL_PROBLEMS_PARTS', 'e': 'BK_HIDE_MISSING_PARTS', 'c': bool},
{'n': 'HIDE_ALL_SETS', 'c': bool},
{'n': 'HIDE_ALL_STORAGES', 'c': bool},
{'n': 'HIDE_STATISTICS', 'c': bool},
{'n': 'HIDE_SET_INSTRUCTIONS', 'c': bool},
{'n': 'HIDE_TABLE_DAMAGED_PARTS', 'c': bool},
{'n': 'HIDE_TABLE_MISSING_PARTS', 'c': bool},
{'n': 'HIDE_TABLE_CHECKED_PARTS', 'c': bool},
{'n': 'HIDE_WISHES', 'c': bool},
{'n': 'MINIFIGURES_DEFAULT_ORDER', 'd': '"rebrickable_minifigures"."name" ASC'}, # noqa: E501
{'n': 'MINIFIGURES_FOLDER', 'd': 'minifigs', 's': True},
{'n': 'MINIFIGURES_FOLDER', 'd': 'data/minifigures'},
{'n': 'MINIFIGURES_PAGINATION_SIZE_DESKTOP', 'd': 10, 'c': int},
{'n': 'MINIFIGURES_PAGINATION_SIZE_MOBILE', 'd': 5, 'c': int},
{'n': 'MINIFIGURES_SERVER_SIDE_PAGINATION', 'c': bool},
{'n': 'NO_THREADED_SOCKET', 'c': bool},
{'n': 'PARTS_DEFAULT_ORDER', 'd': '"rebrickable_parts"."name" ASC, "rebrickable_parts"."color_name" ASC, "bricktracker_parts"."spare" ASC'}, # noqa: E501
{'n': 'PARTS_FOLDER', 'd': 'parts', 's': True},
{'n': 'PARTS_SERVER_SIDE_PAGINATION', 'c': bool},
{'n': 'SETS_SERVER_SIDE_PAGINATION', 'c': bool},
{'n': 'PARTS_DEFAULT_ORDER', 'd': '"rebrickable_parts"."name" ASC, "rebrickable_parts"."color_name" ASC, "combined"."spare" ASC'}, # noqa: E501
{'n': 'PARTS_FOLDER', 'd': 'data/parts'},
{'n': 'PARTS_PAGINATION_SIZE_DESKTOP', 'd': 10, 'c': int},
{'n': 'PARTS_PAGINATION_SIZE_MOBILE', 'd': 5, 'c': int},
{'n': 'PROBLEMS_PAGINATION_SIZE_DESKTOP', 'd': 10, 'c': int},
{'n': 'PROBLEMS_PAGINATION_SIZE_MOBILE', 'd': 10, 'c': int},
{'n': 'PROBLEMS_SERVER_SIDE_PAGINATION', 'c': bool},
{'n': 'SETS_PAGINATION_SIZE_DESKTOP', 'd': 12, 'c': int},
{'n': 'SETS_PAGINATION_SIZE_MOBILE', 'd': 4, 'c': int},
{'n': 'PORT', 'd': 3333, 'c': int},
{'n': 'PURCHASE_DATE_FORMAT', 'd': '%d/%m/%Y'},
{'n': 'PURCHASE_CURRENCY', 'd': ''},
@@ -52,21 +76,37 @@ CONFIG: Final[list[dict[str, Any]]] = [
{'n': 'REBRICKABLE_LINK_PART_PATTERN', 'd': 'https://rebrickable.com/parts/{part}/_/{color}'}, # noqa: E501
{'n': 'REBRICKABLE_LINK_INSTRUCTIONS_PATTERN', 'd': 'https://rebrickable.com/instructions/{path}'}, # noqa: E501
{'n': 'REBRICKABLE_USER_AGENT', 'd': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'}, # noqa: E501
{'n': 'USER_AGENT', 'd': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'}, # noqa: E501
{'n': 'PEERON_DOWNLOAD_DELAY', 'd': 1000, 'c': int},
{'n': 'PEERON_INSTRUCTION_PATTERN', 'd': 'http://peeron.com/scans/{set_number}-{version_number}'},
{'n': 'PEERON_MIN_IMAGE_SIZE', 'd': 100, 'c': int},
{'n': 'PEERON_SCAN_PATTERN', 'd': 'http://belay.peeron.com/scans/{set_number}-{version_number}/'},
{'n': 'PEERON_THUMBNAIL_PATTERN', 'd': 'http://belay.peeron.com/thumbs/{set_number}-{version_number}/'},
{'n': 'REBRICKABLE_LINKS', 'e': 'LINKS', 'c': bool},
{'n': 'REBRICKABLE_PAGE_SIZE', 'd': 100, 'c': int},
{'n': 'RETIRED_SETS_FILE_URL', 'd': 'https://docs.google.com/spreadsheets/d/1rlYfEXtNKxUOZt2Mfv0H17DvK7bj6Pe0CuYwq6ay8WA/gviz/tq?tqx=out:csv&sheet=Sorted%20by%20Retirement%20Date'}, # noqa: E501
{'n': 'RETIRED_SETS_PATH', 'd': './retired_sets.csv'},
{'n': 'RETIRED_SETS_PATH', 'd': 'data/retired_sets.csv'},
{'n': 'SETS_DEFAULT_ORDER', 'd': '"rebrickable_sets"."number" DESC, "rebrickable_sets"."version" ASC'}, # noqa: E501
{'n': 'SETS_FOLDER', 'd': 'sets', 's': True},
{'n': 'SETS_FOLDER', 'd': 'data/sets'},
{'n': 'SETS_CONSOLIDATION', 'd': False, 'c': bool},
{'n': 'SHOW_GRID_FILTERS', 'c': bool},
{'n': 'SHOW_GRID_SORT', 'c': bool},
{'n': 'SHOW_SETS_DUPLICATE_FILTER', 'd': True, 'c': bool},
{'n': 'SKIP_SPARE_PARTS', 'c': bool},
{'n': 'HIDE_SPARE_PARTS', 'c': bool},
{'n': 'SOCKET_NAMESPACE', 'd': 'bricksocket'},
{'n': 'SOCKET_PATH', 'd': '/bricksocket/'},
{'n': 'STORAGE_DEFAULT_ORDER', 'd': '"bricktracker_metadata_storages"."name" ASC'}, # noqa: E501
{'n': 'THEMES_FILE_URL', 'd': 'https://cdn.rebrickable.com/media/downloads/themes.csv.gz'}, # noqa: E501
{'n': 'THEMES_PATH', 'd': './themes.csv'},
{'n': 'THEMES_PATH', 'd': 'data/themes.csv'},
{'n': 'TIMEZONE', 'd': 'Etc/UTC'},
{'n': 'USE_REMOTE_IMAGES', 'c': bool},
{'n': 'WISHES_DEFAULT_ORDER', 'd': '"bricktracker_wishes"."rowid" DESC'},
{'n': 'STATISTICS_SHOW_CHARTS', 'd': True, 'c': bool},
{'n': 'STATISTICS_DEFAULT_EXPANDED', 'd': True, 'c': bool},
{'n': 'DARK_MODE', 'c': bool},
{'n': 'BADGE_ORDER_GRID', 'd': ['theme', 'year', 'parts', 'total_minifigures', 'owner'], 'c': list},
{'n': 'BADGE_ORDER_DETAIL', 'd': ['theme', 'tag', 'year', 'parts', 'instance_count', 'total_minifigures', 'total_missing', 'total_damaged', 'owner', 'storage', 'purchase_date', 'purchase_location', 'purchase_price', 'instructions', 'rebrickable', 'bricklink'], 'c': list},
{'n': 'SHOW_NOTES_GRID', 'd': False, 'c': bool},
{'n': 'SHOW_NOTES_DETAIL', 'd': True, 'c': bool},
]
@@ -0,0 +1,343 @@
import os
import logging
from typing import Any, Dict, Final, List, Optional
from pathlib import Path
from flask import current_app
logger = logging.getLogger(__name__)
# Environment variables that can be changed live without restart
LIVE_CHANGEABLE_VARS: Final[List[str]] = [
'BK_BRICKLINK_LINKS',
'BK_DEFAULT_TABLE_PER_PAGE',
'BK_INDEPENDENT_ACCORDIONS',
'BK_HIDE_ADD_SET',
'BK_HIDE_ADD_BULK_SET',
'BK_HIDE_ADMIN',
'BK_ADMIN_DEFAULT_EXPANDED_SECTIONS',
'BK_HIDE_ALL_INSTRUCTIONS',
'BK_HIDE_ALL_MINIFIGURES',
'BK_HIDE_INDIVIDUAL_MINIFIGURES',
'BK_HIDE_ALL_PARTS',
'BK_HIDE_INDIVIDUAL_PARTS',
'BK_HIDE_ALL_PROBLEMS_PARTS',
'BK_HIDE_ALL_SETS',
'BK_HIDE_ALL_STORAGES',
'BK_HIDE_STATISTICS',
'BK_HIDE_SET_INSTRUCTIONS',
'BK_HIDE_TABLE_DAMAGED_PARTS',
'BK_HIDE_TABLE_MISSING_PARTS',
'BK_HIDE_TABLE_CHECKED_PARTS',
'BK_DISABLE_QUICK_ADD_INDIVIDUAL_PARTS',
'BK_HIDE_WISHES',
'BK_MINIFIGURES_PAGINATION_SIZE_DESKTOP',
'BK_MINIFIGURES_PAGINATION_SIZE_MOBILE',
'BK_MINIFIGURES_SERVER_SIDE_PAGINATION',
'BK_PARTS_PAGINATION_SIZE_DESKTOP',
'BK_PARTS_PAGINATION_SIZE_MOBILE',
'BK_PARTS_SERVER_SIDE_PAGINATION',
'BK_SETS_SERVER_SIDE_PAGINATION',
'BK_PROBLEMS_PAGINATION_SIZE_DESKTOP',
'BK_PROBLEMS_PAGINATION_SIZE_MOBILE',
'BK_PROBLEMS_SERVER_SIDE_PAGINATION',
'BK_SETS_PAGINATION_SIZE_DESKTOP',
'BK_SETS_PAGINATION_SIZE_MOBILE',
'BK_SETS_CONSOLIDATION',
'BK_RANDOM',
'BK_REBRICKABLE_LINKS',
'BK_SHOW_GRID_FILTERS',
'BK_SHOW_GRID_SORT',
'BK_SHOW_SETS_DUPLICATE_FILTER',
'BK_SKIP_SPARE_PARTS',
'BK_HIDE_SPARE_PARTS',
'BK_USE_REMOTE_IMAGES',
'BK_PEERON_DOWNLOAD_DELAY',
'BK_PEERON_MIN_IMAGE_SIZE',
'BK_REBRICKABLE_PAGE_SIZE',
'BK_STATISTICS_SHOW_CHARTS',
'BK_STATISTICS_DEFAULT_EXPANDED',
'BK_DARK_MODE',
# Badge order preferences
'BK_BADGE_ORDER_GRID',
'BK_BADGE_ORDER_DETAIL',
'BK_SHOW_NOTES_GRID',
'BK_SHOW_NOTES_DETAIL',
# Default ordering and formatting
'BK_INSTRUCTIONS_ALLOWED_EXTENSIONS',
'BK_MINIFIGURES_DEFAULT_ORDER',
'BK_PARTS_DEFAULT_ORDER',
'BK_SETS_DEFAULT_ORDER',
'BK_PURCHASE_LOCATION_DEFAULT_ORDER',
'BK_STORAGE_DEFAULT_ORDER',
'BK_WISHES_DEFAULT_ORDER',
# URL and Pattern Variables
# BrickLink minifigure links disabled - no ID mapping available
# 'BK_BRICKLINK_LINK_MINIFIGURE_PATTERN',
'BK_BRICKLINK_LINK_PART_PATTERN',
'BK_BRICKLINK_LINK_SET_PATTERN',
'BK_REBRICKABLE_IMAGE_NIL',
'BK_REBRICKABLE_IMAGE_NIL_MINIFIGURE',
'BK_REBRICKABLE_LINK_MINIFIGURE_PATTERN',
'BK_REBRICKABLE_LINK_PART_PATTERN',
'BK_REBRICKABLE_LINK_INSTRUCTIONS_PATTERN',
'BK_PEERON_INSTRUCTION_PATTERN',
'BK_PEERON_SCAN_PATTERN',
'BK_PEERON_THUMBNAIL_PATTERN',
'BK_RETIRED_SETS_FILE_URL',
'BK_RETIRED_SETS_PATH',
'BK_THEMES_FILE_URL',
'BK_THEMES_PATH'
]
# Environment variables that require restart
RESTART_REQUIRED_VARS: Final[List[str]] = [
'BK_AUTHENTICATION_PASSWORD',
'BK_AUTHENTICATION_KEY',
'BK_DATABASE_PATH',
'BK_DEBUG',
'BK_DISABLE_INDIVIDUAL_PARTS',
'BK_DISABLE_INDIVIDUAL_MINIFIGURES',
'BK_DOMAIN_NAME',
'BK_HOST',
'BK_PORT',
'BK_SOCKET_NAMESPACE',
'BK_SOCKET_PATH',
'BK_NO_THREADED_SOCKET',
'BK_TIMEZONE',
'BK_REBRICKABLE_API_KEY',
'BK_INSTRUCTIONS_FOLDER',
'BK_PARTS_FOLDER',
'BK_SETS_FOLDER',
'BK_MINIFIGURES_FOLDER',
'BK_DATABASE_TIMESTAMP_FORMAT',
'BK_FILE_DATETIME_FORMAT',
'BK_PURCHASE_DATE_FORMAT',
'BK_PURCHASE_CURRENCY',
'BK_REBRICKABLE_USER_AGENT',
'BK_USER_AGENT'
]
class ConfigManager:
"""Manages live configuration updates for BrickTracker"""
def __init__(self):
# Check for .env in data folder first (v1.3+), fallback to root (backward compatibility)
data_env = Path('data/.env')
root_env = Path('.env')
if data_env.exists():
self.env_file_path = data_env
logger.info("Using configuration file: data/.env")
elif root_env.exists():
self.env_file_path = root_env
logger.info("Using configuration file: .env (consider migrating to data/.env)")
else:
# Default to data/.env for new installations
self.env_file_path = data_env
logger.info("Configuration file will be created at: data/.env")
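The fallback order above can be factored into a small pure helper. The sketch below is illustrative only — the `resolve_env_path` name and the injectable path parameters are assumptions, not part of the original class:

```python
from pathlib import Path

def resolve_env_path(
    data_env: Path = Path('data/.env'),
    root_env: Path = Path('.env'),
) -> Path:
    # Prefer the v1.3+ location, fall back to a legacy root .env,
    # and default to data/.env for fresh installations
    if data_env.exists():
        return data_env
    if root_env.exists():
        return root_env
    return data_env
```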
def get_current_config(self) -> Dict[str, Any]:
"""Get current configuration values for live-changeable variables"""
config = {}
for var in LIVE_CHANGEABLE_VARS:
# Get internal config name
internal_name = var.removeprefix('BK_')
# Get current value from Flask config
if internal_name in current_app.config:
config[var] = current_app.config[internal_name]
else:
# Fallback to environment variable
config[var] = os.environ.get(var, '')
return config
def get_restart_required_config(self) -> Dict[str, Any]:
"""Get current configuration values for restart-required variables"""
config = {}
for var in RESTART_REQUIRED_VARS:
# Get internal config name
internal_name = var.removeprefix('BK_')
# Get current value from Flask config
if internal_name in current_app.config:
config[var] = current_app.config[internal_name]
else:
# Fallback to environment variable
config[var] = os.environ.get(var, '')
return config
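Both getters rely on the same `BK_`-prefix-to-internal-name mapping. A minimal sketch of that mapping (the helper name is hypothetical):

```python
def internal_name(var: str) -> str:
    # str.removeprefix strips only a leading 'BK_';
    # str.replace('BK_', '') would also rewrite any later occurrence
    return var.removeprefix('BK_')
```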
def update_config(self, updates: Dict[str, Any]) -> Dict[str, str]:
"""Update configuration values. Returns dict with status for each update"""
results = {}
for var_name, new_value in updates.items():
if var_name not in LIVE_CHANGEABLE_VARS:
results[var_name] = f"Error: {var_name} requires restart to change"
continue
try:
# Update environment variable
os.environ[var_name] = str(new_value)
# Update Flask config
internal_name = var_name.removeprefix('BK_')
cast_value = self._cast_value(var_name, new_value)
current_app.config[internal_name] = cast_value
# Update .env file
self._update_env_file(var_name, new_value)
results[var_name] = "Updated successfully"
if current_app.debug:
logger.info(f"Config updated: {var_name}={new_value}")
except Exception as e:
results[var_name] = f"Error: {str(e)}"
logger.error(f"Failed to update {var_name}: {e}")
return results
def _cast_value(self, var_name: str, value: Any) -> Any:
"""Cast value to appropriate type based on variable name"""
# List variables (admin sections, badge order) - Check this FIRST before boolean check
if any(keyword in var_name.lower() for keyword in ['sections', 'badge_order', 'allowed_extensions']):
if isinstance(value, str):
return [section.strip() for section in value.split(',') if section.strip()]
elif isinstance(value, list):
return value
else:
return []
# Integer variables (pagination sizes, delays, etc.) - Check BEFORE boolean check
if any(keyword in var_name.lower() for keyword in ['_size', '_page', 'delay', 'min_', 'per_page', 'page_size']):
try:
return int(value)
except (ValueError, TypeError):
return 0
# Boolean variables - More specific patterns to avoid conflicts
if any(keyword in var_name.lower() for keyword in ['hide_', 'disable_', 'server_side_pagination', '_links', 'random', 'skip_', 'show_', 'use_', '_consolidation', '_charts', '_expanded']):
if isinstance(value, str):
return value.lower() in ('true', '1', 'yes', 'on')
return bool(value)
# String variables (default)
return str(value)
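The precedence in `_cast_value` — lists first, then integers, then booleans, falling through to strings — can be exercised standalone. The sketch below trims the keyword lists for brevity and is not a drop-in copy of the method:

```python
def cast_value(var_name: str, value):
    """Sketch of the keyword-based casting order:
    lists, then integers, then booleans, then plain strings."""
    name = var_name.lower()
    # Comma-separated list variables must be checked first
    if any(k in name for k in ('sections', 'badge_order', 'allowed_extensions')):
        if isinstance(value, str):
            return [v.strip() for v in value.split(',') if v.strip()]
        return value if isinstance(value, list) else []
    # Integer variables (pagination sizes, delays) before booleans
    if any(k in name for k in ('_size', '_page', 'delay', 'min_', 'per_page')):
        try:
            return int(value)
        except (ValueError, TypeError):
            return 0
    # Boolean variables accept common truthy strings
    if any(k in name for k in ('hide_', 'disable_', 'show_', 'use_')):
        if isinstance(value, str):
            return value.lower() in ('true', '1', 'yes', 'on')
        return bool(value)
    return str(value)
```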
def _format_env_value(self, value: Any) -> str:
"""Format value for .env file storage"""
if isinstance(value, bool):
return 'true' if value else 'false'
elif isinstance(value, (int, float)):
return str(value)
elif isinstance(value, list):
return ','.join(str(item) for item in value)
elif value is None:
return ''
else:
return str(value)
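A few round-trip examples of `_format_env_value`'s conventions, as a standalone copy for illustration. Note that the `bool` check must come before the numeric one, since `bool` is a subclass of `int`:

```python
def format_env_value(value) -> str:
    # Booleans become lowercase strings, lists are comma-joined,
    # None becomes an empty string, everything else is str()'d
    if isinstance(value, bool):
        return 'true' if value else 'false'
    if isinstance(value, (int, float)):
        return str(value)
    if isinstance(value, list):
        return ','.join(str(item) for item in value)
    if value is None:
        return ''
    return str(value)
```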
def _update_env_file(self, var_name: str, value: Any) -> None:
"""Update the .env file with new value"""
if not self.env_file_path.exists():
# Ensure parent directory exists
self.env_file_path.parent.mkdir(parents=True, exist_ok=True)
self.env_file_path.touch()
# Read current .env content
lines = []
if self.env_file_path.exists():
with open(self.env_file_path, 'r', encoding='utf-8') as f:
lines = f.readlines()
# Format value for .env file
env_value = self._format_env_value(value)
# Find and update the line, or add new line
updated = False
# First pass: Look for existing active variable
for i, line in enumerate(lines):
if line.strip().startswith(f"{var_name}="):
lines[i] = f"{var_name}={env_value}\n"
updated = True
break
# Second pass: If not found, look for commented-out variable
if not updated:
for i, line in enumerate(lines):
stripped = line.strip()
# Check for commented-out variable: # BK_VAR= or #BK_VAR=
if stripped.startswith('#') and var_name in stripped:
# Extract the part after #, handling optional space
comment_content = stripped[1:].strip()
if comment_content.startswith(f"{var_name}=") or comment_content.startswith(f"{var_name} ="):
# Uncomment and set new value, preserving any leading whitespace from original line
leading_whitespace = line[:len(line) - len(line.lstrip())]
lines[i] = f"{leading_whitespace}{var_name}={env_value}\n"
updated = True
logger.info(f"Uncommented and updated {var_name} in .env file")
break
# Third pass: If still not found, append to end
if not updated:
lines.append(f"{var_name}={env_value}\n")
logger.info(f"Added new {var_name} to end of .env file")
# Write back to file
with open(self.env_file_path, 'w', encoding='utf-8') as f:
f.writelines(lines)
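The three-pass update above (replace an active line, else uncomment a `# VAR=` line, else append) can be sketched as a pure function over the file's lines; this is a simplified illustration that, unlike the original, does not preserve leading whitespace when uncommenting:

```python
def update_env_lines(lines: list[str], var_name: str, env_value: str) -> list[str]:
    out = list(lines)
    # Pass 1: replace an existing active assignment
    for i, line in enumerate(out):
        if line.strip().startswith(f"{var_name}="):
            out[i] = f"{var_name}={env_value}\n"
            return out
    # Pass 2: uncomment a commented-out assignment
    for i, line in enumerate(out):
        stripped = line.strip()
        if stripped.startswith('#') and stripped[1:].strip().startswith(f"{var_name}="):
            out[i] = f"{var_name}={env_value}\n"
            return out
    # Pass 3: append at the end
    out.append(f"{var_name}={env_value}\n")
    return out
```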
def validate_config(self) -> Dict[str, Any]:
"""Validate current configuration"""
issues = []
warnings = []
# Check if critical variables are set
if not os.environ.get('BK_REBRICKABLE_API_KEY'):
warnings.append("BK_REBRICKABLE_API_KEY not set - some features may not work")
# Check for conflicting settings
if (os.environ.get('BK_PARTS_SERVER_SIDE_PAGINATION', '').lower() == 'false' and
int(os.environ.get('BK_PARTS_PAGINATION_SIZE_DESKTOP', '10')) > 100):
warnings.append("Large pagination size with client-side pagination may cause performance issues")
# Check pagination sizes are reasonable
for var in ['BK_SETS_PAGINATION_SIZE_DESKTOP', 'BK_PARTS_PAGINATION_SIZE_DESKTOP', 'BK_MINIFIGURES_PAGINATION_SIZE_DESKTOP']:
try:
size = int(os.environ.get(var, '10'))
if size < 1:
issues.append(f"{var} must be at least 1")
elif size > 1000:
warnings.append(f"{var} is very large ({size}) - may cause performance issues")
except ValueError:
issues.append(f"{var} must be a valid integer")
return {
'issues': issues,
'warnings': warnings,
'status': 'valid' if not issues else 'has_issues'
}
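The per-variable size check inside `validate_config` distinguishes hard issues from soft warnings; a standalone sketch of that classification (hypothetical helper name, messages abbreviated):

```python
def check_pagination_size(raw: str) -> tuple[list[str], list[str]]:
    """Classify a pagination size: issues for invalid or too-small
    values, warnings for very large ones."""
    issues: list[str] = []
    warnings: list[str] = []
    try:
        size = int(raw)
    except ValueError:
        issues.append('must be a valid integer')
    else:
        if size < 1:
            issues.append('must be at least 1')
        elif size > 1000:
            warnings.append(f'very large ({size})')
    return issues, warnings
```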
def get_variable_help(self, var_name: str) -> str:
"""Get help text for a configuration variable"""
help_text = {
'BK_BRICKLINK_LINKS': 'Show BrickLink links throughout the application',
'BK_DEFAULT_TABLE_PER_PAGE': 'Default number of items per page in tables',
'BK_INDEPENDENT_ACCORDIONS': 'Make accordion sections independent (can open multiple)',
'BK_HIDE_ADD_SET': 'Hide the "Add Set" menu entry',
'BK_HIDE_ADD_BULK_SET': 'Hide the "Add Bulk Set" menu entry',
'BK_HIDE_ADMIN': 'Hide the "Admin" menu entry',
'BK_ADMIN_DEFAULT_EXPANDED_SECTIONS': 'Admin sections to expand by default (comma-separated)',
'BK_HIDE_ALL_INSTRUCTIONS': 'Hide the "Instructions" menu entry',
'BK_HIDE_ALL_MINIFIGURES': 'Hide the "Minifigures" menu entry',
'BK_HIDE_ALL_PARTS': 'Hide the "Parts" menu entry',
'BK_HIDE_ALL_PROBLEMS_PARTS': 'Hide the "Problems" menu entry',
'BK_HIDE_ALL_SETS': 'Hide the "Sets" menu entry',
'BK_HIDE_ALL_STORAGES': 'Hide the "Storages" menu entry',
'BK_HIDE_STATISTICS': 'Hide the "Statistics" menu entry',
'BK_HIDE_SET_INSTRUCTIONS': 'Hide instructions section in set details',
'BK_HIDE_TABLE_DAMAGED_PARTS': 'Hide the "Damaged" column in parts tables',
'BK_HIDE_TABLE_MISSING_PARTS': 'Hide the "Missing" column in parts tables',
'BK_HIDE_TABLE_CHECKED_PARTS': 'Hide the "Checked" column in parts tables',
'BK_HIDE_WISHES': 'Hide the "Wishes" menu entry',
'BK_SETS_CONSOLIDATION': 'Enable set consolidation/grouping functionality',
'BK_SHOW_GRID_FILTERS': 'Show filter options on grids by default',
'BK_SHOW_GRID_SORT': 'Show sort options on grids by default',
'BK_SKIP_SPARE_PARTS': 'Skip importing spare parts when downloading sets from Rebrickable',
'BK_HIDE_SPARE_PARTS': 'Hide spare parts from parts lists (spare parts must still be in database)',
'BK_USE_REMOTE_IMAGES': 'Use remote images from Rebrickable CDN instead of local',
'BK_STATISTICS_SHOW_CHARTS': 'Show collection growth charts on statistics page',
'BK_STATISTICS_DEFAULT_EXPANDED': 'Expand all statistics sections by default',
'BK_DARK_MODE': 'Enable dark mode theme'
}
return help_text.get(var_name, 'No help available for this variable')
@@ -60,7 +60,7 @@ class BrickConfiguration(object):
if self.cast == bool and isinstance(value, str):
value = value.lower() in ('true', 'yes', '1')
# Static path fixup (legacy - only for paths with s: True flag)
if self.static_path and isinstance(value, str):
value = os.path.normpath(value)
@@ -70,6 +70,10 @@ class BrickConfiguration(object):
# Remove static prefix
value = value.removeprefix('static/')
# Normalize regular paths (not marked as static)
elif not self.static_path and isinstance(value, str) and ('FOLDER' in self.name or 'PATH' in self.name):
value = os.path.normpath(value)
# Type casting
if self.cast is not None:
self.value = self.cast(value)
@@ -0,0 +1,566 @@
import logging
import traceback
from datetime import datetime
from typing import Any, Self, TYPE_CHECKING
from uuid import uuid4
from flask import current_app, url_for
from .exceptions import NotFoundException, DatabaseException, ErrorException
from .parser import parse_minifig
from .rebrickable import Rebrickable
from .rebrickable_minifigure import RebrickableMinifigure
from .set_owner_list import BrickSetOwnerList
from .set_purchase_location_list import BrickSetPurchaseLocationList
from .set_storage_list import BrickSetStorageList
from .set_tag_list import BrickSetTagList
from .sql import BrickSQL
if TYPE_CHECKING:
from .socket import BrickSocket
logger = logging.getLogger(__name__)
# Individual minifigure (not associated with a set)
class IndividualMinifigure(RebrickableMinifigure):
# Queries
select_query: str = 'individual_minifigure/select/by_id'
insert_query: str = 'individual_minifigure/insert'
# Delete an individual minifigure
def delete(self, /) -> None:
BrickSQL().executescript(
'individual_minifigure/delete',
id=self.fields.id
)
# Import an individual minifigure into the database
def download(self, socket: 'BrickSocket', data: dict[str, Any], /) -> bool:
# Load the minifigure
if not self.load(socket, data, from_download=True):
return False
try:
# Insert into the database
socket.auto_progress(
message='Minifigure {figure}: inserting into database'.format(
figure=self.fields.figure
),
increment_total=True,
)
# Generate a UUID for this minifigure
self.fields.id = str(uuid4())
# Save the storage
storage = BrickSetStorageList.get(
data.get('storage', ''),
allow_none=True
)
self.fields.storage = storage.fields.id if storage else None
# Save the purchase location
purchase_location = BrickSetPurchaseLocationList.get(
data.get('purchase_location', ''),
allow_none=True
)
self.fields.purchase_location = purchase_location.fields.id if purchase_location else None
# Save purchase date and price
purchase_date = data.get('purchase_date', None)
if purchase_date == '':
purchase_date = None
if purchase_date is not None:
try:
purchase_date = datetime.strptime(purchase_date, '%Y/%m/%d').timestamp()
except Exception:
purchase_date = None
self.fields.purchase_date = purchase_date
purchase_price = data.get('purchase_price', None)
if purchase_price == '':
purchase_price = None
if purchase_price is not None:
try:
purchase_price = float(purchase_price)
except Exception:
purchase_price = None
self.fields.purchase_price = purchase_price
# Save quantity and description
self.fields.quantity = int(data.get('quantity', 1))
self.fields.description = data.get('description', '')
# IMPORTANT: Insert rebrickable minifigure FIRST
# bricktracker_individual_minifigures has FK to rebrickable_minifigures
self.insert_rebrickable_loose()
# Now insert into bricktracker_individual_minifigures
# Use no_defer=True to ensure the insert happens before we insert parts
# (parts have a foreign key constraint on this id)
self.insert(commit=False, no_defer=True)
# Save the owners
owners: list[str] = list(data.get('owners', []))
for id in owners:
owner = BrickSetOwnerList.get(id)
owner.update_individual_minifigure_state(self, state=True)
# Save the tags
tags: list[str] = list(data.get('tags', []))
for id in tags:
tag = BrickSetTagList.get(id)
tag.update_individual_minifigure_state(self, state=True)
# Load the parts (elements) for this minifigure
if not self.download_parts(socket):
return False
# Commit the transaction to the database
socket.auto_progress(
message='Minifigure {figure}: writing to the database'.format(
figure=self.fields.figure
),
increment_total=True,
)
BrickSQL().commit()
# Info
logger.info('Minifigure {figure}: imported (id: {id})'.format(
figure=self.fields.figure,
id=self.fields.id,
))
# Complete
socket.complete(
message='Minifigure {figure}: imported (<a href="{url}">Go to the minifigure</a>)'.format(
figure=self.fields.figure,
url=self.url()
),
download=True
)
except Exception as e:
socket.fail(
message='Error while importing minifigure {figure}: {error}'.format(
figure=self.fields.figure,
error=e,
)
)
logger.debug(traceback.format_exc())
return False
return True
# Download parts (elements) for this individual minifigure
def download_parts(self, socket: 'BrickSocket', /) -> bool:
try:
# Check if we have cached parts data from load()
if hasattr(self, '_cached_parts_response'):
response = self._cached_parts_response
logger.debug('Using cached parts data from load()')
else:
# Need to fetch parts data
socket.auto_progress(
message='Minifigure {figure}: loading parts from Rebrickable'.format(
figure=self.fields.figure
),
increment_total=True,
)
logger.debug('rebrick.lego.get_minifig_elements("{figure}")'.format(
figure=self.fields.figure,
))
# Load parts data from Rebrickable API
import json
from rebrick import lego
parameters = {
'api_key': current_app.config['REBRICKABLE_API_KEY'],
'page_size': current_app.config['REBRICKABLE_PAGE_SIZE'],
}
response = json.loads(lego.get_minifig_elements(
self.fields.figure,
**parameters
).read())
socket.auto_progress(
message='Minifigure {figure}: saving parts to database'.format(
figure=self.fields.figure
),
)
# Insert each part into individual_minifigure_parts table
from .rebrickable_part import RebrickablePart
if 'results' in response:
logger.debug('Processing {count} parts for minifigure {figure}'.format(
count=len(response["results"]),
figure=self.fields.figure
))
for idx, result in enumerate(response['results']):
part_num = result['part']['part_num']
color_id = result['color']['id']
logger.debug(
'Part {current}/{total}: {part_num} (color: {color_id}, quantity: {quantity})'.format(
current=idx+1,
total=len(response["results"]),
part_num=part_num,
color_id=color_id,
quantity=result["quantity"]
)
)
# Insert rebrickable part data first
part_data = RebrickablePart.from_rebrickable(result)
logger.debug('Rebrickable part data keys: {keys}'.format(
keys=list(part_data.keys())
))
# Insert into rebrickable_parts if not exists
BrickSQL().execute(
'rebrickable/part/insert',
parameters=part_data,
commit=False,
)
# Download part image if not using remote images
if not current_app.config['USE_REMOTE_IMAGES']:
# Create a RebrickablePart instance for image download
from .set import BrickSet
try:
part_instance = RebrickablePart(record=part_data)
from .rebrickable_image import RebrickableImage
RebrickableImage(
BrickSet(), # Dummy set
minifigure=self,
part=part_instance,
).download()
except Exception as e:
logger.warning(
'Could not download image for part {part_num}: {error}'.format(
part_num=part_num,
error=e
)
)
# Insert into bricktracker_individual_minifigure_parts
individual_part_params = {
'id': self.fields.id,
'part': part_num,
'color': color_id,
'spare': result.get('is_spare', False),
'quantity': result['quantity'],
'element': result.get('element_id'),
'rebrickable_inventory': result['id'],
}
logger.debug('Individual part params: {params}'.format(
params=individual_part_params
))
BrickSQL().execute(
'individual_minifigure/part/insert',
parameters=individual_part_params,
commit=False,
)
logger.debug('Successfully inserted all {count} parts'.format(
count=len(response["results"])
))
else:
logger.warning('No results in parts response for minifigure {figure}'.format(
figure=self.fields.figure
))
# Clean up cached data
if hasattr(self, '_cached_parts_response'):
delattr(self, '_cached_parts_response')
return True
except Exception as e:
socket.fail(
message='Error loading parts for minifigure {figure}: {error}'.format(
figure=self.fields.figure,
error=e,
)
)
logger.debug(traceback.format_exc())
return False
# Insert the individual minifigure from Rebrickable
def insert_rebrickable_loose(self, /) -> None:
# Insert the Rebrickable minifigure to the database
# Note: We override the parent's insert_rebrickable since we don't have a brickset
from .rebrickable_image import RebrickableImage
# Explicitly build parameters for rebrickable_minifigures insert
params = {
'figure': self.fields.figure,
'number': self.fields.number,
'name': self.fields.name,
'image': self.fields.image,
'number_of_parts': self.fields.number_of_parts,
}
BrickSQL().execute(
RebrickableMinifigure.insert_query,
parameters=params,
commit=False,
)
# Download image locally if not using remote images
if not current_app.config['USE_REMOTE_IMAGES']:
# Create a dummy BrickSet for RebrickableImage
# RebrickableImage checks minifigure first before set, so this works
from .set import BrickSet
try:
RebrickableImage(
BrickSet(), # Dummy set - not used since minifigure takes priority
minifigure=self,
).download()
logger.debug('Downloaded image for individual minifigure {figure}'.format(
figure=self.fields.figure
))
except Exception as e:
logger.warning(
'Could not download image for individual minifigure {figure}: {error}'.format(
figure=self.fields.figure,
error=e
)
)
# Load the minifigure from Rebrickable
def load(
self,
socket: 'BrickSocket',
data: dict[str, Any],
/,
*,
from_download: bool = False,
) -> bool:
# Reset the progress
socket.progress_count = 0
socket.progress_total = 2
try:
# Check if individual minifigures are disabled
from flask import current_app
if current_app.config.get('DISABLE_INDIVIDUAL_MINIFIGURES', False):
raise ErrorException(
'Individual minifigures system is disabled. '
'Only set-based minifigures can be added.'
)
socket.auto_progress(message='Parsing minifigure number')
figure = parse_minifig(str(data['figure']))
socket.auto_progress(
message='Minifigure {figure}: loading from Rebrickable'.format(
figure=figure,
),
)
logger.debug('rebrick.lego.get_minifig_elements("{figure}")'.format(
figure=figure,
))
# Load from Rebrickable using get_minifig_elements
# This gives us both minifigure info and parts in one call
import json
from rebrick import lego
parameters = {
'api_key': current_app.config['REBRICKABLE_API_KEY'],
'page_size': current_app.config['REBRICKABLE_PAGE_SIZE'],
}
response = json.loads(lego.get_minifig_elements(
figure,
**parameters
).read())
# Extract minifigure info from the first part's metadata
if 'results' in response and len(response['results']) > 0:
first_part = response['results'][0]
# Build minifigure data from the response
self.fields.figure = first_part['set_num']
self.fields.number_of_parts = response['count']
# We need to fetch the proper name and image from get_minifig()
# This is a small additional call but gives us the proper minifigure data
try:
# get_minifig() only needs api_key, not page_size
minifig_params = {
'api_key': current_app.config['REBRICKABLE_API_KEY']
}
minifig_response = json.loads(lego.get_minifig(
figure,
**minifig_params
).read())
self.fields.name = minifig_response.get('name', "Minifigure {figure}".format(figure=figure))
# Use the minifig image from get_minifig() - this is the assembled minifig
self.fields.image = minifig_response.get('set_img_url')
# Extract number from figure (e.g., fig-005997 -> 5997)
try:
self.fields.number = int(figure.split('-')[1])
except (IndexError, ValueError):
self.fields.number = 0
except Exception as e:
logger.warning('Could not fetch minifigure name: {error}'.format(
error=e
))
self.fields.name = "Minifigure {figure}".format(figure=figure)
# Try to extract number anyway
try:
self.fields.number = int(figure.split('-')[1])
except (IndexError, ValueError):
self.fields.number = 0
# Fallback: try to extract image from first part with element_id
self.fields.image = None
for result in response['results']:
if result.get('element_id') and result['part'].get('part_img_url'):
self.fields.image = result['part']['part_img_url']
break
# Store the parts data for later use in download
self._cached_parts_response = response
else:
raise NotFoundException('Minifigure {figure} has no parts in Rebrickable'.format(
figure=figure
))
# Download minifigure image during preview if not using remote images
if not from_download and not current_app.config['USE_REMOTE_IMAGES'] and self.fields.image:
from .rebrickable_image import RebrickableImage
from .set import BrickSet
try:
RebrickableImage(
BrickSet(),
minifigure=self,
).download()
logger.debug('Downloaded preview image for minifigure {figure}'.format(
figure=self.fields.figure
))
except Exception as e:
logger.warning(
'Could not download preview image for minifigure {figure}: {error}'.format(
figure=self.fields.figure,
error=e
)
)
socket.emit('MINIFIGURE_LOADED', self.short(
from_download=from_download
))
if not from_download:
socket.complete(
message='Minifigure {figure}: loaded from Rebrickable'.format(
figure=self.fields.figure
)
)
return True
except Exception as e:
# Check if this is the "disabled" error - if so, show cleaner message
error_msg = str(e)
if 'Individual minifigures system is disabled' in error_msg:
socket.fail(message=error_msg)
else:
socket.fail(
message='Could not load the minifigure from Rebrickable: {error}. Data: {data}'.format(
error=error_msg,
data=data,
)
)
if not isinstance(e, (NotFoundException, ErrorException)):
logger.debug(traceback.format_exc())
return False
# Return a short form of the minifigure
def short(self, /, *, from_download: bool = False) -> dict[str, Any]:
return {
'download': from_download,
'image': self.url_for_image(),
'name': self.fields.name,
'figure': self.fields.figure,
}
# Select an individual minifigure by ID
def select_by_id(self, id: str, /) -> Self:
# Save the ID parameter
self.fields.id = id
# Import status list here to get metadata columns
from .set_status_list import BrickSetStatusList
# Pass metadata columns to the query (using set tables which now handle all entities)
context = {
'owners': BrickSetOwnerList.as_columns() if BrickSetOwnerList.list() else '',
'statuses': BrickSetStatusList.as_columns(all=True) if BrickSetStatusList.list(all=True) else '',
'tags': BrickSetTagList.as_columns() if BrickSetTagList.list() else '',
}
if not self.select(**context):
raise NotFoundException(
'Individual minifigure with ID {id} was not found in the database'.format(
id=id,
),
)
return self
# URL to this individual minifigure instance
def url(self, /) -> str:
return url_for('individual_minifigure.details', id=self.fields.id)
# String representation for debugging
def __repr__(self, /) -> str:
figure = getattr(self.fields, 'figure', 'unknown')
name = getattr(self.fields, 'name', 'Unknown')
qty = getattr(self.fields, 'quantity', 0)
return f'<IndividualMinifigure {figure} "{name}" qty:{qty}>'
# URL for updating quantity
def url_for_quantity(self, /) -> str:
return url_for('individual_minifigure.update_quantity', id=self.fields.id)
# URL for updating description
def url_for_description(self, /) -> str:
return url_for('individual_minifigure.update_description', id=self.fields.id)
# Parts
def generic_parts(self, /):
from .part_list import BrickPartList
return BrickPartList().from_individual_minifigure(self)
# Override from_rebrickable to handle minifigure data
@staticmethod
def from_rebrickable(data: dict[str, Any], /, **_) -> dict[str, Any]:
# Extracting number
number = int(str(data['set_num'])[5:])
return {
'figure': str(data['set_num']),
'number': int(number),
'name': str(data['set_name']),
'image': str(data['set_img_url']) if data.get('set_img_url') else None,
'number_of_parts': int(data.get('num_parts', 0)),
}
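The figure-number extraction used in `load()` can be illustrated in isolation — a Rebrickable figure ID like `fig-005997` yields `5997`, and anything malformed falls back to `0` (hypothetical helper name):

```python
def figure_number(figure: str) -> int:
    # 'fig-005997'.split('-')[1] == '005997'; int() drops leading zeros
    try:
        return int(figure.split('-')[1])
    except (IndexError, ValueError):
        return 0
```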
@@ -0,0 +1,98 @@
import logging
from typing import Self
from .individual_minifigure import IndividualMinifigure
from .record_list import BrickRecordList
from .set_owner_list import BrickSetOwnerList
from .set_status_list import BrickSetStatusList
from .set_tag_list import BrickSetTagList
logger = logging.getLogger(__name__)
# Individual minifigures list
class IndividualMinifigureList(BrickRecordList[IndividualMinifigure]):
# Queries
all_query: str = 'individual_minifigure/list/all'
instances_by_figure_query: str = 'individual_minifigure/select/instances_by_figure'
using_storage_query: str = 'individual_minifigure/list/using_storage'
using_purchase_location_query: str = 'individual_minifigure/list/using_purchase_location'
without_storage_query: str = 'individual_minifigure/list/without_storage'
def __init__(self, /):
super().__init__()
# Load all individual minifigures
def all(self, /) -> Self:
# Prepare context with metadata columns
context = {
'owners': BrickSetOwnerList.as_columns() if BrickSetOwnerList.list() else 'NULL AS "no_owners"',
'statuses': BrickSetStatusList.as_columns(all=True) if BrickSetStatusList.list(all=True) else 'NULL AS "no_statuses"',
'tags': BrickSetTagList.as_columns() if BrickSetTagList.list() else 'NULL AS "no_tags"',
}
self.list(override_query=self.all_query, **context)
return self
# Load all individual instances of a specific minifigure figure
def instances_by_figure(self, figure: str, /) -> Self:
self.fields.figure = figure
# Prepare context with metadata columns (using consolidated metadata tables)
context = {
'owners': BrickSetOwnerList.as_columns() if BrickSetOwnerList.list() else 'NULL AS "no_owners"',
'statuses': BrickSetStatusList.as_columns(all=True) if BrickSetStatusList.list(all=True) else 'NULL AS "no_statuses"',
'tags': BrickSetTagList.as_columns() if BrickSetTagList.list() else 'NULL AS "no_tags"',
}
# Load the instances from the database
self.list(override_query=self.instances_by_figure_query, **context)
return self
# Load all individual minifigures using a specific storage
def using_storage(self, storage: 'BrickSetStorage', /) -> Self:
# Save the storage parameter
self.fields.storage = storage.fields.id
# Load the minifigures from the database
self.list(override_query=self.using_storage_query)
return self
# Load all individual minifigures using a specific purchase location
def using_purchase_location(self, purchase_location: 'BrickSetPurchaseLocation', /) -> Self:
# Save the purchase location parameter
self.fields.purchase_location = purchase_location.fields.id
# Load the minifigures from the database
self.list(override_query=self.using_purchase_location_query)
return self
# Load all individual minifigures without storage
def without_storage(self, /) -> Self:
# Load minifigures with no storage
self.list(override_query=self.without_storage_query)
return self
# Base individual minifigure list
def list(
self,
/,
*,
override_query: str | None = None,
order: str | None = None,
limit: int | None = None,
**context,
) -> None:
# Load the individual minifigures from the database
for record in super().select(
override_query=override_query,
order=order,
limit=limit,
**context
):
individual_minifigure = IndividualMinifigure(record=record)
self.records.append(individual_minifigure)
@@ -0,0 +1,917 @@
import logging
import os
import traceback
from typing import Any, Self, TYPE_CHECKING
from urllib.parse import urlparse
from uuid import uuid4
from flask import current_app, url_for
import requests
from shutil import copyfileobj
from .exceptions import NotFoundException, DatabaseException, ErrorException
from .record import BrickRecord
from .set_owner_list import BrickSetOwnerList
from .set_purchase_location_list import BrickSetPurchaseLocationList
from .set_storage_list import BrickSetStorageList
from .set_tag_list import BrickSetTagList
from .sql import BrickSQL
if TYPE_CHECKING:
from .socket import BrickSocket
logger = logging.getLogger(__name__)
# Individual part (standalone, not associated with a set or minifigure)
class IndividualPart(BrickRecord):
# Queries
select_query: str = 'individual_part/select/by_id'
insert_query: str = 'individual_part/insert'
update_query: str = 'individual_part/update'
def __init__(
self,
/,
*,
record: Any | None = None
):
super().__init__()
# Ingest the record if it has one
if record is not None:
self.ingest(record)
# Select a specific individual part by UUID
def select_by_id(self, id: str, /) -> Self:
from .set_owner_list import BrickSetOwnerList
from .set_status_list import BrickSetStatusList
from .set_tag_list import BrickSetTagList
self.fields.id = id
if not self.select(
override_query=self.select_query,
owners=BrickSetOwnerList.as_columns(),
statuses=BrickSetStatusList.as_columns(all=True),
tags=BrickSetTagList.as_columns(),
):
raise NotFoundException(
'Individual part with id "{id}" not found'.format(id=id)
)
return self
# Delete an individual part
def delete(self, /) -> None:
sql = BrickSQL()
sql.executescript(
'individual_part/delete',
id=self.fields.id
)
sql.commit()
# Generate HTML ID for form elements
def html_id(self, prefix: str | None = None, /) -> str:
components: list[str] = ['individual-part']
if prefix is not None:
components.append(prefix)
components.append(self.fields.part)
components.append(str(self.fields.color))
components.append(self.fields.id)
return '-'.join(components)
# URL for quantity update
def url_for_quantity(self, /) -> str:
return url_for('individual_part.update_quantity', id=self.fields.id)
# URL for description update
def url_for_description(self, /) -> str:
return url_for('individual_part.update_description', id=self.fields.id)
# URL for problem (missing/damaged) update
def url_for_problem(self, problem_type: str, /) -> str:
if problem_type == 'missing':
return url_for('individual_part.update_missing', id=self.fields.id)
elif problem_type == 'damaged':
return url_for('individual_part.update_damaged', id=self.fields.id)
else:
raise ValueError(f'Invalid problem type: {problem_type}')
# URL for checked status update
def url_for_checked(self, /) -> str:
return url_for('individual_part.update_checked', id=self.fields.id)
# URL for purchase date update
def url_for_purchase_date(self, /) -> str:
return url_for('individual_part.update_purchase_date', id=self.fields.id)
# URL for purchase price update
def url_for_purchase_price(self, /) -> str:
return url_for('individual_part.update_purchase_price', id=self.fields.id)
# URL for this part's detail page
def url(self, /) -> str:
return url_for('individual_part.details', id=self.fields.id)
def url_for_delete(self, /) -> str:
return url_for('individual_part.delete_part', id=self.fields.id)
def url_for_image(self, /) -> str:
if current_app.config.get('USE_REMOTE_IMAGES', False):
if hasattr(self.fields, 'image') and self.fields.image:
return self.fields.image
else:
return current_app.config.get('REBRICKABLE_IMAGE_NIL', '')
else:
from .rebrickable_image import RebrickableImage
if hasattr(self.fields, 'image') and self.fields.image:
image_id, _ = os.path.splitext(os.path.basename(urlparse(self.fields.image).path))
if image_id:
return RebrickableImage.static_url(image_id, 'PARTS_FOLDER')
return RebrickableImage.static_url(RebrickableImage.nil_name(), 'PARTS_FOLDER')
# String representation for debugging
def __repr__(self, /) -> str:
"""String representation for debugging"""
part_id = getattr(self.fields, 'part', 'unknown')
color_id = getattr(self.fields, 'color', 'unknown')
qty = getattr(self.fields, 'quantity', 0)
return f'<IndividualPart {part_id} color:{color_id} qty:{qty}>'
# Get or fetch color information from rebrickable_colors table
@staticmethod
def get_or_fetch_color(color_id: int, /) -> dict[str, Any] | None:
sql = BrickSQL()
# Check if color exists in cache
result = sql.fetchone('rebrickable_colors/select/by_color_id', parameters={'color_id': color_id})
if result:
# Color found in cache
return {
'color_id': result[0],
'name': result[1],
'rgb': result[2],
'is_trans': result[3],
'bricklink_color_id': result[4],
'bricklink_color_name': result[5]
}
# Color not in cache, fetch from API
try:
import rebrick
import json
rebrick.init(current_app.config['REBRICKABLE_API_KEY'])
color_response = rebrick.lego.get_color(color_id)
color_data = json.loads(color_response.read())
# Extract BrickLink color info
bricklink_color_id = None
bricklink_color_name = None
if 'external_ids' in color_data and 'BrickLink' in color_data['external_ids']:
bricklink_info = color_data['external_ids']['BrickLink']
if 'ext_ids' in bricklink_info and bricklink_info['ext_ids']:
bricklink_color_id = bricklink_info['ext_ids'][0]
if 'ext_descrs' in bricklink_info and bricklink_info['ext_descrs']:
bricklink_color_name = bricklink_info['ext_descrs'][0][0] if bricklink_info['ext_descrs'][0] else None
# Store in cache
sql.execute('rebrickable_colors/insert', parameters={
'color_id': color_data['id'],
'name': color_data['name'],
'rgb': color_data.get('rgb'),
'is_trans': color_data.get('is_trans', False),
'bricklink_color_id': bricklink_color_id,
'bricklink_color_name': bricklink_color_name
})
sql.connection.commit()
logger.info('Cached color {color_id} ({color_name}) with BrickLink ID {bricklink_id}'.format(
color_id=color_id,
color_name=color_data["name"],
bricklink_id=bricklink_color_id
))
return {
'color_id': color_data['id'],
'name': color_data['name'],
'rgb': color_data.get('rgb'),
'is_trans': color_data.get('is_trans', False),
'bricklink_color_id': bricklink_color_id,
'bricklink_color_name': bricklink_color_name
}
except Exception as e:
logger.warning('Could not fetch color {color_id} from API: {error}'.format(
color_id=color_id,
error=e
))
return None
# Download image for this part
def download_image(self, image_url: str, /, *, image_filename: str | None = None) -> None:
if not image_url:
return
# Use provided filename or extract from URL
if image_filename:
image_id = image_filename
else:
image_id, _ = os.path.splitext(os.path.basename(urlparse(image_url).path))
if not image_id:
return
# Build path (same pattern as RebrickableImage)
parts_folder = current_app.config['PARTS_FOLDER']
extension = 'jpg'  # Images are always stored with a .jpg extension (matching RebrickableImage)
# If folder is an absolute path (starts with /), use it directly
# Otherwise, make it relative to app root (current_app.root_path)
if parts_folder.startswith('/'):
base_path = parts_folder
else:
base_path = os.path.join(current_app.root_path, parts_folder)
path = os.path.join(base_path, f'{image_id}.{extension}')
# Avoid downloading if file exists
if os.path.exists(path):
return
# Create directory if it doesn't exist
os.makedirs(os.path.dirname(path), exist_ok=True)
# Download the image
try:
response = requests.get(image_url, stream=True, timeout=30)
if response.ok:
# Decode any gzip/deflate transfer encoding before writing raw bytes
response.raw.decode_content = True
with open(path, 'wb') as f:
copyfileobj(response.raw, f)
logger.info('Downloaded image to {path}'.format(path=path))
except Exception as e:
logger.warning('Could not download image from {url}: {error}'.format(
url=image_url,
error=e
))
# Load available colors for a part
def load_colors(self, socket: 'BrickSocket', data: dict[str, Any], /) -> bool:
# Check if individual parts are disabled
if current_app.config.get('DISABLE_INDIVIDUAL_PARTS', False):
socket.fail(message='Individual parts system is disabled.')
return False
try:
# Extract part number
part_num = str(data.get('part', '')).strip()
if not part_num:
raise ErrorException('Part number is required')
# Fetch available colors from Rebrickable
import rebrick
import json
rebrick.init(current_app.config['REBRICKABLE_API_KEY'])
# Setup progress tracking
socket.progress_count = 0
socket.progress_total = 2 # Fetch part info + fetch colors
try:
# Get part information for the name
socket.auto_progress(message='Fetching part information')
part_response = rebrick.lego.get_part(part_num)
part_data = json.loads(part_response.read())
part_name = part_data.get('name', part_num)
# Get all available colors for this part
socket.auto_progress(message='Fetching available colors')
colors_response = rebrick.lego.get_part_colors(part_num)
colors_data = json.loads(colors_response.read())
# Extract the results
colors = colors_data.get('results', [])
if not colors:
raise ErrorException(f'No colors found for part {part_num}')
# Download images locally if USE_REMOTE_IMAGES is False
if not current_app.config.get('USE_REMOTE_IMAGES', False):
# Add image downloads to progress
socket.progress_total += len(colors)
for color in colors:
image_url = color.get('part_img_url', '')
element_id = color.get('elements', [])
# Use first element_id if available, otherwise extract from URL
if element_id and len(element_id) > 0:
image_filename = str(element_id[0])
else:
# Fallback: extract from URL
image_filename = None
if image_url:
image_filename, _ = os.path.splitext(os.path.basename(urlparse(image_url).path))
if image_url and image_filename:
socket.auto_progress(message='Downloading image for {color}'.format(
color=color.get("color_name", "color")
))
try:
self.download_image(image_url, image_filename=image_filename)
except Exception as e:
logger.warning('Could not download image for part {part_num} color {color_id}: {error}'.format(
part_num=part_num,
color_id=color.get("color_id"),
error=e
))
# Emit the part colors loaded event
logger.info('Emitting {count} colors for part {part_num} ({part_name})'.format(
count=len(colors),
part_num=part_num,
part_name=part_name
))
socket.emit(
'PART_COLORS_LOADED',
{
'part': part_num,
'part_name': part_name,
'colors': colors,
'count': len(colors)
}
)
logger.info('Successfully loaded {count} colors for part {part_num}'.format(
count=len(colors),
part_num=part_num
))
return True
except Exception as e:
error_msg = str(e)
# Provide helpful error message for printed/decorated parts
if '404' in error_msg or 'Not Found' in error_msg:
# Check if this might be a printed part (leading digits followed by a pattern code)
from itertools import takewhile
base_part = ''.join(takewhile(str.isdigit, part_num))
if base_part and base_part != part_num:
raise ErrorException(
'Part {part_num} not found in Rebrickable. This appears to be a printed/decorated part. '
'Try searching for the base part number: {base_part}'.format(
part_num=part_num,
base_part=base_part
)
)
else:
raise ErrorException(
'Part {part_num} not found in Rebrickable. '
'Please verify the part number is correct.'.format(
part_num=part_num
)
)
else:
raise ErrorException(
'Could not fetch colors for part {part_num}: {error}'.format(
part_num=part_num,
error=error_msg
)
)
except Exception as e:
error_msg = str(e)
socket.fail(message=f'Could not load part colors: {error_msg}')
if not isinstance(e, (NotFoundException, ErrorException)):
logger.debug(traceback.format_exc())
return False
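The 404 fallback above reduces a printed/decorated part number to its leading digits before suggesting a retry. A minimal standalone sketch of that helper (the part numbers are illustrative examples, not real catalog lookups):

```python
# Sketch of the base-part fallback used when Rebrickable returns a 404
# for a printed/decorated part number. Example numbers are hypothetical.
from itertools import takewhile


def base_part_number(part_num: str) -> str:
    # Keep only the leading digits: '3626cpb1234' -> '3626',
    # while a plain numeric part like '3001' is returned unchanged.
    return ''.join(takewhile(str.isdigit, part_num))


if __name__ == '__main__':
    for num in ('3001', '3626cpb1234', '973pb0001'):
        print(num, '->', base_part_number(num))
```

Using `takewhile` stops at the first non-digit, so a pattern code like `pb1234` does not leak its digits into the suggested base number.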
# Add a new individual part
def add(self, socket: 'BrickSocket', data: dict[str, Any], /) -> bool:
# Check if individual parts are disabled
if current_app.config.get('DISABLE_INDIVIDUAL_PARTS', False):
socket.fail(message='Individual parts system is disabled.')
return False
try:
# Reset progress
socket.progress_count = 0
socket.progress_total = 3
socket.auto_progress(message='Validating part and color')
# Extract data
part_num = str(data.get('part', '')).strip()
color_id = int(data.get('color', -1))
quantity = int(data.get('quantity', 1))
if not part_num:
raise ErrorException('Part number is required')
if color_id < 0:
raise ErrorException('Valid color ID is required')
if quantity <= 0:
raise ErrorException('Quantity must be greater than 0')
# Check if color info was pre-loaded (from load_colors)
color_data = data.get('color_info', None)
part_name = data.get('part_name', None)
# Validate part+color exists in rebrickable_parts
# If not, fetch from Rebrickable or use pre-loaded data and insert
sql = BrickSQL()
result = sql.fetchone('rebrickable_parts/check_exists', parameters={'part': part_num, 'color_id': color_id})
exists = bool(result and result[0] > 0)
# Store image URL for downloading later; full_color_info is only
# populated below when the part is missing, so default both here
image_url = None
full_color_info = None
if not exists:
# Fetch full color information (with BrickLink mapping)
socket.auto_progress(message='Fetching color information')
full_color_info = IndividualPart.get_or_fetch_color(color_id)
# If we have pre-loaded color data, use it; otherwise fetch from Rebrickable
if color_data and part_name:
# Use pre-loaded data from get_part_colors() response
socket.auto_progress(message='Using cached part info')
image_url = color_data.get('part_img_url', '')
# Extract image_id from element_id or URL
element_ids = color_data.get('elements', [])
if element_ids and len(element_ids) > 0:
image_id = str(element_ids[0])
elif image_url:
image_id, _ = os.path.splitext(os.path.basename(urlparse(image_url).path))
else:
image_id = None
# Insert into rebrickable_parts using the pre-loaded data
sql.execute('rebrickable_parts/insert_with_preloaded_data', parameters={
'part': part_num,
'color_id': color_id,
'color_name': color_data.get('color_name', ''),
'color_rgb': full_color_info.get('rgb') if full_color_info else None,
'color_transparent': full_color_info.get('is_trans') if full_color_info else None,
'bricklink_color_id': full_color_info.get('bricklink_color_id') if full_color_info else None,
'bricklink_color_name': full_color_info.get('bricklink_color_name') if full_color_info else None,
'name': part_name,
'image': image_url,
'image_id': image_id,
'url': current_app.config['REBRICKABLE_LINK_PART_PATTERN'].format(part=part_num, color=color_id)
})
else:
# Fetch from Rebrickable (fallback for old workflow)
socket.auto_progress(message='Fetching part info from Rebrickable')
import rebrick
import json
# Initialize rebrick with API key
rebrick.init(current_app.config['REBRICKABLE_API_KEY'])
try:
# Get part information
part_info = json.loads(rebrick.lego.get_part(part_num).read())
# Get color information (this also caches it in rebrickable_colors)
# full_color_info already fetched above, but get again to be sure
if not full_color_info:
full_color_info = IndividualPart.get_or_fetch_color(color_id)
# Get part+color specific info (for the image and element_id)
part_color_info = json.loads(rebrick.lego.get_part_color(part_num, color_id).read())
# Get image URL
image_url = part_color_info.get('part_img_url', part_info.get('part_img_url', ''))
# Extract image_id from element_ids or URL
element_ids = part_color_info.get('elements', [])
if element_ids and len(element_ids) > 0:
image_id = str(element_ids[0])
elif image_url:
image_id, _ = os.path.splitext(os.path.basename(urlparse(image_url).path))
else:
image_id = None
# Insert into rebrickable_parts with BrickLink color info
sql.execute('rebrickable_parts/insert_with_preloaded_data', parameters={
'part': part_info['part_num'],
'color_id': full_color_info['color_id'] if full_color_info else color_id,
'color_name': full_color_info['name'] if full_color_info else '',
'color_rgb': full_color_info['rgb'] if full_color_info else None,
'color_transparent': full_color_info['is_trans'] if full_color_info else None,
'bricklink_color_id': full_color_info.get('bricklink_color_id') if full_color_info else None,
'bricklink_color_name': full_color_info.get('bricklink_color_name') if full_color_info else None,
'name': part_info['name'],
'image': image_url,
'image_id': image_id,
'url': part_info['part_url']
})
except Exception as e:
error_msg = str(e)
# Provide helpful error message for printed/decorated parts
if '404' in error_msg or 'Not Found' in error_msg:
from itertools import takewhile
base_part = ''.join(takewhile(str.isdigit, part_num))
if base_part and base_part != part_num:
raise ErrorException(
f'Part {part_num} with color {color_id} not found in Rebrickable. '
f'This appears to be a printed/decorated part. '
f'Try using the base part number: {base_part}'
)
else:
raise ErrorException(
f'Part {part_num} with color {color_id} not found in Rebrickable. '
f'Please verify the part number is correct.'
)
else:
raise ErrorException(
f'Part {part_num} with color {color_id} not found in Rebrickable: {error_msg}'
)
else:
# Part already exists in rebrickable_parts, get the image URL
result = sql.fetchone('rebrickable_parts/select/image_by_part_color', parameters={'part': part_num, 'color_id': color_id})
if result and result[0]:
image_url = result[0]
# Generate UUID and insert individual part
socket.auto_progress(message='Adding part to collection')
part_id = str(uuid4())
# Get storage and purchase location
storage = BrickSetStorageList.get(
data.get('storage', ''),
allow_none=True
)
purchase_location = BrickSetPurchaseLocationList.get(
data.get('purchase_location', ''),
allow_none=True
)
# Set fields
self.fields.id = part_id
self.fields.part = part_num
self.fields.color = color_id
self.fields.quantity = quantity
self.fields.missing = 0
self.fields.damaged = 0
self.fields.checked = 0
self.fields.description = data.get('description', '')
self.fields.lot_id = None # Single parts are not in a lot
self.fields.storage = storage.fields.id if storage else None
self.fields.purchase_location = purchase_location.fields.id if purchase_location else None
self.fields.purchase_date = data.get('purchase_date', None)
self.fields.purchase_price = data.get('purchase_price', None)
# Insert into database
self.insert(commit=False, no_defer=True)
# Save owners
owners: list[str] = list(data.get('owners', []))
for owner_id in owners:
owner = BrickSetOwnerList.get(owner_id)
owner.update_individual_part_state(self, state=True)
# Save tags
tags: list[str] = list(data.get('tags', []))
for tag_id in tags:
tag = BrickSetTagList.get(tag_id)
tag.update_individual_part_state(self, state=True)
# Commit
sql.connection.commit()
# Download image if we have a URL
if image_url:
try:
self.download_image(image_url)
except Exception as e:
# Don't fail the whole operation if image download fails
logger.warning('Could not download image for part {part_num} color {color_id}: {error}'.format(
part_num=part_num,
color_id=color_id,
error=e
))
# Get color name for success message
color_name = 'Unknown'
if color_data and color_data.get('color_name'):
color_name = color_data.get('color_name')
elif full_color_info and full_color_info.get('name'):
color_name = full_color_info.get('name')
# Generate link to part details page
part_url = url_for('part.details', part=part_num, color=color_id)
socket.complete(
message=f'Successfully added part {part_num} in {color_name} (<a href="{part_url}">View details</a>)'
)
return True
except Exception as e:
error_msg = str(e)
if 'Individual parts system is disabled' in error_msg:
socket.fail(message=error_msg)
else:
socket.fail(
message=f'Could not add individual part: {error_msg}'
)
if not isinstance(e, (NotFoundException, ErrorException)):
logger.debug(traceback.format_exc())
return False
# Create multiple individual parts (bulk mode - no lot)
def create_bulk(self, socket: 'BrickSocket', data: dict[str, Any], /) -> bool:
"""
Create multiple individual parts without creating a lot.
Expected data format:
{
'cart': [
{
'part': '3001',
'part_name': 'Brick 2 x 4',
'color_id': 1,
'color_name': 'White',
'quantity': 10,
'color_info': {...}
},
...
],
'storage': 'storage_id',
'purchase_location': 'purchase_location_id',
'purchase_date': timestamp,
'purchase_price': 0.0,
'owners': ['owner_id1', ...],
'tags': ['tag_id1', ...]
}
"""
try:
# Validate cart data
cart = data.get('cart', [])
if not cart or not isinstance(cart, list):
raise ErrorException('Cart is empty or invalid')
socket.auto_progress(
message=f'Adding {len(cart)} individual parts',
increment_total=True
)
# Get storage
storage = BrickSetStorageList.get(
data.get('storage', ''),
allow_none=True
)
storage_id = storage.fields.id if storage else None
# Get purchase location
purchase_location = BrickSetPurchaseLocationList.get(
data.get('purchase_location', ''),
allow_none=True
)
purchase_location_id = purchase_location.fields.id if purchase_location else None
# Get purchase info
purchase_date = data.get('purchase_date', None)
purchase_price = data.get('purchase_price', None)
# Get owners and tags
owners: list[str] = list(data.get('owners', []))
tags: list[str] = list(data.get('tags', []))
# Add all parts from cart
parts_added = 0
for idx, cart_item in enumerate(cart):
part_num = cart_item.get('part')
color_id = cart_item.get('color_id')
quantity = cart_item.get('quantity', 1)
color_info = cart_item.get('color_info', {})
socket.auto_progress(
message=f'Adding part {idx + 1}/{len(cart)}: {part_num} in {cart_item.get("color_name", "unknown color")}',
increment_total=True
)
# Create individual part with no lot_id
part_uuid = str(uuid4())
# Ensure color exists and get full color info (including RGB)
full_color_info = IndividualPart.get_or_fetch_color(color_id)
# Insert the part
sql = BrickSQL()
# Ensure part/color combination exists in rebrickable_parts (same as lot creation)
try:
# Check if part exists
result = sql.fetchone('rebrickable_parts/check_exists', parameters={'part': part_num, 'color_id': color_id})
exists = bool(result and result[0] > 0)
if not exists:
# Insert part data
part_name = cart_item.get('part_name', '')
color_name = cart_item.get('color_name', '')
image_url = color_info.get('part_img_url', '')
# Extract image_id from element_ids or URL
element_ids = color_info.get('elements', [])
if element_ids and len(element_ids) > 0:
image_id = str(element_ids[0])
elif image_url:
image_id, _ = os.path.splitext(os.path.basename(urlparse(image_url).path))
else:
image_id = None
# Use full_color_info for RGB and transparency data (same as single-part add)
sql.execute('rebrickable_parts/insert_part_color', parameters={
'part': part_num,
'name': part_name,
'color_id': color_id,
'color_name': color_name,
'color_rgb': full_color_info.get('rgb') if full_color_info else '',
'color_transparent': full_color_info.get('is_trans') if full_color_info else False,
'image': image_url,
'image_id': image_id,
'url': current_app.config['REBRICKABLE_LINK_PART_PATTERN'].format(part=part_num, color=color_id),
'bricklink_color_id': full_color_info.get('bricklink_color_id') if full_color_info else None,
'bricklink_color_name': full_color_info.get('bricklink_color_name') if full_color_info else None
})
except Exception as e:
logger.warning('Could not ensure part data for {part_num}/{color_id}: {error}'.format(
part_num=part_num,
color_id=color_id,
error=e
))
# Insert individual part
sql.execute(
'individual_part/insert',
parameters={
'id': part_uuid,
'part': part_num,
'color': color_id,
'quantity': quantity,
'lot_id': None, # No lot - this is bulk individual parts mode
'storage': storage_id,
'purchase_location': purchase_location_id,
'purchase_date': purchase_date,
'purchase_price': purchase_price,
'description': None,
'missing': 0,
'damaged': 0,
'checked': 0
}
)
# Add owners
for owner_id in owners:
owner = BrickSetOwnerList.get(owner_id)
if owner:
sql.execute(
'individual_part/metadata/owner/insert',
parameters={
'part_id': part_uuid,
'owner_id': owner_id
}
)
# Add tags
for tag_id in tags:
tag = BrickSetTagList.get(tag_id)
if tag:
sql.execute(
'individual_part/metadata/tag/insert',
parameters={
'part_id': part_uuid,
'tag_id': tag_id
}
)
# Download part image if available
image_url = color_info.get('part_img_url', '')
if image_url:
try:
self.download_image(image_url)
except Exception as e:
# Don't fail the whole operation if image download fails
logger.warning('Could not download image for part {part_num} color {color_id}: {error}'.format(
part_num=part_num,
color_id=color_id,
error=e
))
parts_added += 1
# Commit all changes
sql = BrickSQL()
sql.commit()
socket.auto_progress(
message=f'Successfully added {parts_added} individual parts',
increment_total=True
)
# Generate link to individual parts list
parts_url = url_for('individual_part.list')
# Send completion with message and link
socket.complete(
message='Successfully added {count} individual parts. <a href="{url}">View individual parts</a>'.format(
count=parts_added,
url=parts_url
),
parts_added=parts_added
)
return True
except ErrorException as error:
socket.fail(message=str(error))
return False
except Exception as error:
logger.error('Failed to create bulk individual parts: {error}'.format(error=error))
logger.error(traceback.format_exc())
socket.fail(message='Failed to add individual parts: {error}'.format(error=str(error)))
return False
# Update a field
def update_field(self, field: str, value: Any, /) -> Self:
setattr(self.fields, field, value)
# Use a specific update query for each field
sql = BrickSQL()
sql.execute_and_commit('individual_part/update/field', parameters={
'id': self.fields.id,
'value': value
}, field=field)
return self
# Update problem count (missing/damaged)
def update_problem(self, problem: str, data: dict[str, Any], /) -> int:
# Handle both 'value' key and 'amount' key
amount: str | int = data.get('value', data.get('amount', '')) # type: ignore
# We need a non-negative integer
try:
if amount == '':
amount = 0
amount = int(amount)
except Exception:
raise ErrorException(f'"{amount}" is not a valid integer')
if amount < 0:
raise ErrorException('Cannot set a negative amount')
setattr(self.fields, problem, amount)
BrickSQL().execute_and_commit(
f'individual_part/update/{problem}',
parameters={
'id': self.fields.id,
problem: amount
}
)
return amount
# Update checked status
def update_checked(self, data: dict[str, Any], /) -> bool:
# Handle both direct 'checked' key and changer.js 'value' key format
if data:
checked = data.get('checked', data.get('value', False))
else:
checked = False
checked = bool(checked)
self.fields.checked = 1 if checked else 0
BrickSQL().execute_and_commit(
'individual_part/update/checked',
parameters={
'id': self.fields.id,
'checked': self.fields.checked
}
)
return checked
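Both `url_for_image()` and `download_image()` above derive an image id from a Rebrickable URL via the same basename/splitext pattern. A self-contained sketch of that extraction (the CDN URL below follows Rebrickable's layout but is an illustrative example):

```python
# Sketch of the image-id extraction shared by url_for_image() and
# download_image(): the id is the URL path's basename minus its extension.
import os
from urllib.parse import urlparse


def image_id_from_url(image_url: str) -> str:
    # '/media/parts/elements/300126.jpg' -> '300126'
    image_id, _ = os.path.splitext(os.path.basename(urlparse(image_url).path))
    return image_id


if __name__ == '__main__':
    url = 'https://cdn.rebrickable.com/media/parts/elements/300126.jpg'
    print(image_id_from_url(url))
```

An empty URL yields an empty id, which is why both call sites fall back to the nil image when the result is falsy.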
@@ -0,0 +1,100 @@
import logging
from typing import Self, TYPE_CHECKING
from .record_list import BrickRecordList
from .individual_part import IndividualPart
if TYPE_CHECKING:
from .set_purchase_location import BrickSetPurchaseLocation
from .set_storage import BrickSetStorage
logger = logging.getLogger(__name__)
# List of individual parts
class IndividualPartList(BrickRecordList):
# Queries
list_query: str = 'individual_part/list/all'
by_part_query: str = 'individual_part/list/by_part'
by_color_query: str = 'individual_part/list/by_color'
by_part_and_color_query: str = 'individual_part/list/by_part_and_color'
by_storage_query: str = 'individual_part/list/by_storage'
using_storage_query: str = 'individual_part/list/using_storage'
using_purchase_location_query: str = 'individual_part/list/using_purchase_location'
without_storage_query: str = 'individual_part/list/without_storage'
problem_query: str = 'individual_part/list/problem'
# Get all individual parts
def all(self, /) -> Self:
self.list(override_query=self.list_query)
return self
# Get individual parts by part number
def by_part(self, part: str, /) -> Self:
self.fields.part = part
self.list(override_query=self.by_part_query)
return self
# Get individual parts by color
def by_color(self, color_id: int, /) -> Self:
self.fields.color = color_id
self.list(override_query=self.by_color_query)
return self
# Get individual parts by part number and color
def by_part_and_color(self, part: str, color_id: int, /) -> Self:
self.fields.part = part
self.fields.color = color_id
self.list(override_query=self.by_part_and_color_query)
return self
# Get individual parts by storage location
def by_storage(self, storage: 'BrickSetStorage', /) -> Self:
self.fields.storage = storage.fields.id
self.list(override_query=self.by_storage_query)
return self
# Get individual parts using a specific storage location
def using_storage(self, storage: 'BrickSetStorage', /) -> Self:
self.fields.storage = storage.fields.id
self.list(override_query=self.using_storage_query)
return self
# Get individual parts using a specific purchase location
def using_purchase_location(self, purchase_location: 'BrickSetPurchaseLocation', /) -> Self:
self.fields.purchase_location = purchase_location.fields.id
self.list(override_query=self.using_purchase_location_query)
return self
# Get individual parts without storage
def without_storage(self, /) -> Self:
self.list(override_query=self.without_storage_query)
return self
# Get individual parts with problems (missing or damaged)
def with_problems(self, /) -> Self:
self.list(override_query=self.problem_query)
return self
# Base individual part list
def list(
self,
/,
*,
override_query: str | None = None,
order: str | None = None,
limit: int | None = None,
**context,
) -> None:
# Load the individual parts from the database
for record in super().select(
override_query=override_query,
order=order,
limit=limit,
**context
):
individual_part = IndividualPart(record=record)
self.records.append(individual_part)
# Set the record class
def set_record_class(self, /) -> None:
self.record_class = IndividualPart
@@ -0,0 +1,302 @@
import logging
import os
import traceback
from datetime import datetime
from typing import Any, Self, TYPE_CHECKING
from urllib.parse import urlparse
from uuid import uuid4
from flask import (
current_app,
url_for,
)
from .exceptions import NotFoundException, DatabaseException, ErrorException
from .individual_part import IndividualPart
from .record import BrickRecord, format_timestamp
from .set_owner_list import BrickSetOwnerList
from .set_purchase_location_list import BrickSetPurchaseLocationList
from .set_storage_list import BrickSetStorageList
from .set_tag_list import BrickSetTagList
from .sql import BrickSQL
if TYPE_CHECKING:
from .socket import BrickSocket
logger = logging.getLogger(__name__)
# Individual part lot (collection/batch of individual parts added together)
class IndividualPartLot(BrickRecord):
# Queries
select_query: str = 'individual_part_lot/select/by_id'
insert_query: str = 'individual_part_lot/insert'
def __init__(
self,
/,
*,
record: Any | None = None
):
super().__init__()
# Ingest the record if it has one
if record is not None:
self.ingest(record)
# Select a specific lot by UUID
def select_by_id(self, id: str, /) -> Self:
self.fields.id = id
if not self.select(
override_query=self.select_query,
owners=BrickSetOwnerList.as_columns(),
tags=BrickSetTagList.as_columns(),
# Note: Part lots don't have statuses (by design)
# Statuses are meant for tracking set completion/verification, which doesn't apply
# to loose part collections. Individual parts within lots can still be marked as
# missing/damaged/checked through the parts inventory system.
):
raise NotFoundException(
'Individual part lot with id "{id}" not found'.format(id=id)
)
return self
# Delete a lot and all its parts
def delete(self, /) -> None:
BrickSQL().executescript(
'individual_part_lot/delete',
id=self.fields.id
)
# Get the URL for this lot
def url(self, /) -> str:
return url_for('individual_part.lot_details', lot_id=self.fields.id)
# String representation for debugging
def __repr__(self, /) -> str:
name = getattr(self.fields, 'name', 'Unnamed') or 'Unnamed'
lot_id = getattr(self.fields, 'id', 'unknown')
# Try to get part_count if available (from optimized query)
part_count = getattr(self.fields, 'part_count', '?')
return f'<IndividualPartLot "{name}" ({part_count} parts) id:{lot_id[:8]}...>'
# Format created date
def created_date_formatted(self, /) -> str:
return format_timestamp(self.fields.created_date)
# Format purchase date
def purchase_date_formatted(self, /) -> str:
return format_timestamp(self.fields.purchase_date)
# Format purchase price
def purchase_price(self, /) -> str:
if self.fields.purchase_price is not None:
return '{price}{currency}'.format(
price=self.fields.purchase_price,
currency=current_app.config['PURCHASE_CURRENCY']
)
else:
return ''
# Get all parts in this lot
def parts(self, /) -> list['IndividualPart']:
sql = BrickSQL()
parts_data = sql.fetchall('individual_part_lot/list/parts', parameters={'lot_id': self.fields.id})
# Convert to list of IndividualPart objects using ingest()
return [IndividualPart(record=record) for record in parts_data]
# Get total quantity of all parts in this lot
def total_quantity(self, /) -> int:
parts = self.parts()
return sum(part.fields.quantity for part in parts)
# Create a new lot with parts from cart
def create(self, socket: 'BrickSocket', data: dict[str, Any], /) -> bool:
"""
Create a new individual part lot with multiple parts.
Expected data format:
{
'cart': [
{
'part': '3001',
'part_name': 'Brick 2 x 4',
'color_id': 1,
'color_name': 'White',
'quantity': 10,
'color_info': {...}
},
...
],
'name': 'Optional lot name',
'description': 'Optional lot description',
'storage': 'storage_id',
'purchase_location': 'purchase_location_id',
'purchase_date': timestamp,
'purchase_price': 0.0,
'owners': ['owner_id1', ...],
'tags': ['tag_id1', ...]
}
"""
try:
# Validate cart data
cart = data.get('cart', [])
if not cart or not isinstance(cart, list):
raise ErrorException('Cart is empty or invalid')
socket.auto_progress(
message=f'Creating lot with {len(cart)} parts',
increment_total=True
)
# Generate UUID for the lot
lot_id = str(uuid4())
self.fields.id = lot_id
# Set lot metadata
self.fields.name = data.get('name', None)
self.fields.description = data.get('description', None)
self.fields.created_date = datetime.now().timestamp()
# Get storage
storage = BrickSetStorageList.get(
data.get('storage', ''),
allow_none=True
)
self.fields.storage = storage.fields.id if storage else None
# Get purchase location
purchase_location = BrickSetPurchaseLocationList.get(
data.get('purchase_location', ''),
allow_none=True
)
self.fields.purchase_location = purchase_location.fields.id if purchase_location else None
# Set purchase info
self.fields.purchase_date = data.get('purchase_date', None)
self.fields.purchase_price = data.get('purchase_price', None)
# Insert the lot record
socket.auto_progress(
message='Inserting lot into database',
increment_total=True
)
self.insert(commit=False)
# Commit the lot so parts can reference it
sql = BrickSQL()
sql.commit()
# Save owners using the metadata update methods
owners: list[str] = list(data.get('owners', []))
for owner_id in owners:
owner = BrickSetOwnerList.get(owner_id)
if owner:
owner.update_individual_part_lot_state(self, state=True, commit=False)
# Save tags using the metadata update methods
tags: list[str] = list(data.get('tags', []))
for tag_id in tags:
tag = BrickSetTagList.get(tag_id)
if tag:
tag.update_individual_part_lot_state(self, state=True, commit=False)
# Add all parts from cart
socket.auto_progress(
message=f'Adding {len(cart)} parts to lot',
increment_total=True
)
for idx, cart_item in enumerate(cart):
part_num = cart_item.get('part')
color_id = cart_item.get('color_id')
quantity = cart_item.get('quantity', 1)
color_info = cart_item.get('color_info', {})
socket.auto_progress(
message=f'Adding part {idx + 1}/{len(cart)}: {part_num} in {cart_item.get("color_name", "unknown color")}',
increment_total=True
)
# Create individual part with lot_id
part_uuid = str(uuid4())
sql = BrickSQL()
# Ensure color and part/color combination exist in rebrickable tables
IndividualPart.get_or_fetch_color(color_id)
part_name = cart_item.get('part_name', '')
color_name = cart_item.get('color_name', '')
image_url = color_info.get('part_img_url', '')
# Extract image_id from element_ids or URL
element_ids = color_info.get('elements', [])
if element_ids and len(element_ids) > 0:
image_id = str(element_ids[0])
elif image_url:
image_id, _ = os.path.splitext(os.path.basename(urlparse(image_url).path))
else:
image_id = None
sql.execute('rebrickable_parts/insert_part_color', parameters={
'part': part_num,
'name': part_name,
'color_id': color_id,
'color_name': color_name,
'color_rgb': color_info.get('rgb', ''),
'color_transparent': color_info.get('is_trans', False),
'image': image_url,
'image_id': image_id,
'url': current_app.config['REBRICKABLE_LINK_PART_PATTERN'].format(part=part_num, color=color_id),
'bricklink_color_id': color_info.get('bricklink_color_id', None),
'bricklink_color_name': color_info.get('bricklink_color_name', None)
})
# Commit so the foreign key constraint can be satisfied
sql.commit()
# Now insert the part with lot_id (NO individual metadata - inherited from lot)
sql.execute('individual_part/insert_with_lot', parameters={
'id': part_uuid,
'part': part_num,
'color': color_id,
'quantity': quantity,
'lot_id': lot_id
})
# Commit all changes
socket.auto_progress(
message='Committing changes to database',
increment_total=True
)
sql.commit()
socket.auto_progress(
message=f'Lot created successfully with {len(cart)} parts',
increment_total=True
)
# Complete with success message and lot URL
lot_url = self.url()
socket.complete(
message=f'Successfully created lot with {len(cart)} parts. <a href="{lot_url}">View lot</a>',
data={
'lot_id': lot_id,
'lot_url': lot_url
}
)
return True
except ErrorException as e:
socket.fail(message=str(e))
logger.error('Error creating lot: {error}'.format(error=e))
return False
except Exception as e:
socket.fail(message='Unexpected error creating lot: {error}'.format(error=str(e)))
logger.error('Unexpected error creating lot: {error}'.format(error=e))
logger.error(traceback.format_exc())
return False
+86
@@ -0,0 +1,86 @@
import logging
from typing import Self, TYPE_CHECKING
from .record_list import BrickRecordList
from .individual_part_lot import IndividualPartLot
if TYPE_CHECKING:
from .set_storage import BrickSetStorage
logger = logging.getLogger(__name__)
# List of individual part lots
class IndividualPartLotList(BrickRecordList):
# Queries
list_query: str = 'individual_part_lot/list/all'
by_part_and_color_query: str = 'individual_part_lot/list/by_part_and_color'
by_storage_query: str = 'individual_part_lot/list/by_storage'
using_storage_query: str = 'individual_part_lot/list/using_storage'
using_purchase_location_query: str = 'individual_part_lot/list/using_purchase_location'
without_storage_query: str = 'individual_part_lot/list/without_storage'
problem_query: str = 'individual_part_lot/list/problem'
# Get all individual part lots
def all(self, /) -> Self:
self.list(override_query=self.list_query)
return self
# Base individual part lot list
def list(
self,
/,
*,
override_query: str | None = None,
order: str | None = None,
limit: int | None = None,
**context,
) -> None:
# Load the individual part lots from the database
for record in super().select(
override_query=override_query,
order=order,
limit=limit,
**context
):
lot = IndividualPartLot(record=record)
self.records.append(lot)
# Set the record class
def set_record_class(self, /) -> None:
self.record_class = IndividualPartLot
# Get individual part lots containing a specific part and color
def by_part_and_color(self, part: str, color_id: int, /) -> Self:
self.fields.part = part
self.fields.color = color_id
self.list(override_query='individual_part_lot/list/by_part_and_color')
return self
# Get individual part lots by storage location
def by_storage(self, storage: 'BrickSetStorage', /) -> Self:
self.fields.storage = storage.fields.id
self.list(override_query=self.by_storage_query)
return self
# Get individual part lots using a specific storage location
def using_storage(self, storage: 'BrickSetStorage', /) -> Self:
self.fields.storage = storage.fields.id
self.list(override_query=self.using_storage_query)
return self
# Get individual part lots using a specific purchase location
def using_purchase_location(self, purchase_location: 'BrickSetPurchaseLocation', /) -> Self:
self.fields.purchase_location = purchase_location.fields.id
self.list(override_query=self.using_purchase_location_query)
return self
# Get individual part lots without storage
def without_storage(self, /) -> Self:
self.list(override_query=self.without_storage_query)
return self
# Get individual part lots with problems (containing parts with missing or damaged items)
def with_problems(self, /) -> Self:
self.list(override_query=self.problem_query)
return self
+76 -27
@@ -13,7 +13,6 @@ import requests
from werkzeug.datastructures import FileStorage
from werkzeug.utils import secure_filename
import re
import cloudscraper
from .exceptions import ErrorException, DownloadException
if TYPE_CHECKING:
@@ -101,16 +100,39 @@ class BrickInstructions(object):
# Skip if we already have it
if os.path.isfile(target):
pdf_url = self.url()
return self.socket.complete(
message=f"File {self.filename} already exists, skipped"
message=f'File {self.filename} already exists, skipped - <a href="{pdf_url}" target="_blank" class="btn btn-sm btn-primary ms-2"><i class="ri-external-link-line"></i> Open PDF</a>'
)
# Fetch PDF via cloudscraper (to bypass Cloudflare)
scraper = cloudscraper.create_scraper()
scraper.headers.update({
"User-Agent": current_app.config['REBRICKABLE_USER_AGENT']
# Use plain requests instead of cloudscraper
session = requests.Session()
session.headers.update({
'User-Agent': current_app.config['REBRICKABLE_USER_AGENT'],
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8',
'Accept-Language': 'en-US,en;q=0.5',
'DNT': '1',
'Connection': 'keep-alive',
'Upgrade-Insecure-Requests': '1',
'Sec-Fetch-Dest': 'document',
'Sec-Fetch-Mode': 'navigate',
'Sec-Fetch-Site': 'same-origin',
'Cache-Control': 'max-age=0'
})
resp = scraper.get(path, stream=True)
# Visit the set's instructions listing page first to establish session cookies
set_number = None
if self.rebrickable:
set_number = self.rebrickable.fields.set
elif self.set:
set_number = self.set
if set_number:
instructions_page = f"https://rebrickable.com/instructions/{set_number}/"
session.get(instructions_page)
session.headers.update({"Referer": instructions_page})
resp = session.get(path, stream=True, allow_redirects=True)
if not resp.ok:
raise DownloadException(f"Failed to download: HTTP {resp.status_code}")
@@ -141,8 +163,9 @@ class BrickInstructions(object):
# Done!
logger.info(f"Downloaded {self.filename}")
pdf_url = self.url()
self.socket.complete(
message=f"File {self.filename} downloaded ({self.human_size()})"
message=f'File {self.filename} downloaded ({self.human_size()}) - <a href="{pdf_url}" target="_blank" class="btn btn-sm btn-primary ms-2"><i class="ri-external-link-line"></i> Open PDF</a>'
)
except Exception as e:
@@ -170,11 +193,16 @@ class BrickInstructions(object):
if filename is None:
filename = self.filename
return os.path.join(
current_app.static_folder, # type: ignore
current_app.config['INSTRUCTIONS_FOLDER'],
filename
)
folder = current_app.config['INSTRUCTIONS_FOLDER']
# If folder is absolute, use it directly
# Otherwise, make it relative to app root (not static folder)
if os.path.isabs(folder):
base_path = folder
else:
base_path = os.path.join(current_app.root_path, folder)
return os.path.join(base_path, filename)
# Rename an instructions file
def rename(self, filename: str, /) -> None:
@@ -215,10 +243,16 @@ class BrickInstructions(object):
folder: str = current_app.config['INSTRUCTIONS_FOLDER']
# Compute the path
path = os.path.join(folder, self.filename)
return url_for('static', filename=path)
# Determine which route to use based on folder path
# If folder contains 'data' (new structure), use data route
# Otherwise use static route (legacy)
if 'data' in folder:
return url_for('data.serve_data_file', folder='instructions', filename=self.filename)
else:
# Legacy: folder is relative to static/
folder_clean = folder.removeprefix('static/')
path = os.path.join(folder_clean, self.filename)
return url_for('static', filename=path)
# Return the icon depending on the extension
def icon(self, /) -> str:
@@ -235,34 +269,49 @@ class BrickInstructions(object):
@staticmethod
def find_instructions(set: str, /) -> list[Tuple[str, str]]:
"""
Scrape Rebrickables HTML and return a list of
Scrape Rebrickable's HTML and return a list of
(filename_slug, download_url). Duplicate slugs get _1, _2, …
"""
page_url = f"https://rebrickable.com/instructions/{set}/"
logger.debug(f"[find_instructions] fetching HTML from {page_url!r}")
# Solve Cloudflare's challenge
scraper = cloudscraper.create_scraper()
scraper.headers.update({'User-Agent': current_app.config['REBRICKABLE_USER_AGENT']})
resp = scraper.get(page_url)
# Use plain requests instead of cloudscraper
session = requests.Session()
session.headers.update({
'User-Agent': current_app.config['REBRICKABLE_USER_AGENT'],
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8',
'Accept-Language': 'en-US,en;q=0.5',
'DNT': '1',
'Connection': 'keep-alive',
'Upgrade-Insecure-Requests': '1',
'Sec-Fetch-Dest': 'document',
'Sec-Fetch-Mode': 'navigate',
'Sec-Fetch-Site': 'none',
'Cache-Control': 'max-age=0'
})
resp = session.get(page_url)
if not resp.ok:
raise ErrorException(f'Failed to load instructions page for {set}. HTTP {resp.status_code}')
soup = BeautifulSoup(resp.content, 'html.parser')
# Match download links with or without query parameters (e.g., ?cfe=timestamp&cfk=key)
link_re = re.compile(r'^/instructions/\d+/.+/download/')
raw: list[tuple[str, str]] = []
for a in soup.find_all('a', href=link_re):
img = a.find('img', alt=True)
if not img or set not in img['alt']:
img = a.find('img', alt=True) # type: ignore
if not img or set not in img['alt']: # type: ignore
continue
# Turn the alt text into a slug
alt_text = img['alt'].removeprefix('LEGO Building Instructions for ')
alt_text = img['alt'].removeprefix('LEGO Building Instructions for ') # type: ignore
slug = re.sub(r'[^A-Za-z0-9]+', '-', alt_text).strip('-')
# Build the absolute download URL
download_url = urljoin('https://rebrickable.com', a['href'])
# Build the absolute download URL - this preserves query parameters
# BeautifulSoup's a['href'] includes the full href with ?cfe=...&cfk=... params
download_url = urljoin('https://rebrickable.com', a['href']) # type: ignore
logger.debug(f"[find_instructions] Found download link: {download_url}")
raw.append((slug, download_url))
if not raw:
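The slug step in `find_instructions` can be sketched on its own; this standalone function mirrors the prefix-strip and hyphen-collapse logic above, with a hypothetical alt text as input:

```python
import re

def slug_from_alt(alt_text: str) -> str:
    # Mirrors find_instructions: drop the marketing prefix, then
    # collapse every run of non-alphanumeric characters into a single
    # hyphen and trim hyphens from both ends.
    alt_text = alt_text.removeprefix('LEGO Building Instructions for ')
    return re.sub(r'[^A-Za-z0-9]+', '-', alt_text).strip('-')
```

Duplicate slugs would still need the `_1`, `_2`, … suffixing mentioned in the docstring; that part is handled outside this sketch.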
+8 -5
@@ -36,11 +36,14 @@ class BrickInstructionsList(object):
# Try to list the files in the instruction folder
try:
# Make a folder relative to static
folder: str = os.path.join(
current_app.static_folder, # type: ignore
current_app.config['INSTRUCTIONS_FOLDER'],
)
folder_config: str = current_app.config['INSTRUCTIONS_FOLDER']
# If folder is absolute, use it directly
# Otherwise, make it relative to app root (not static folder)
if os.path.isabs(folder_config):
folder = folder_config
else:
folder = os.path.join(current_app.root_path, folder_config)
for file in os.scandir(folder):
instruction = BrickInstructions(file)
+193 -9
@@ -9,6 +9,8 @@ from .exceptions import DatabaseException, ErrorException, NotFoundException
from .record import BrickRecord
from .sql import BrickSQL
if TYPE_CHECKING:
from .individual_minifigure import IndividualMinifigure
from .individual_part import IndividualPart
from .set import BrickSet
logger = logging.getLogger(__name__)
@@ -106,6 +108,26 @@ class BrickMetadata(BrickRecord):
metadata_id=self.fields.id
)
# URL to change the selected state of this metadata item for an individual part
def url_for_individual_part_state(self, part_id: str, /) -> str:
# Replace 'set' with 'individual_part' in the endpoint name
endpoint = self.set_state_endpoint.replace('set.', 'individual_part.')
return url_for(
endpoint,
id=part_id,
metadata_id=self.fields.id
)
# URL to change the selected state of this metadata item for an individual minifigure
def url_for_individual_minifigure_state(self, minifigure_id: str, /) -> str:
# Replace 'set' with 'individual_minifigure' in the endpoint name
endpoint = self.set_state_endpoint.replace('set.', 'individual_minifigure.')
return url_for(
endpoint,
id=minifigure_id,
metadata_id=self.fields.id
)
# Select a specific metadata (with an id)
def select_specific(self, id: str, /) -> Self:
# Save the parameters to the fields
@@ -182,7 +204,8 @@ class BrickMetadata(BrickRecord):
/,
*,
json: Any | None = None,
state: Any | None = None
state: Any | None = None,
commit: bool = True
) -> Any:
if state is None and json is not None:
state = json.get('value', False)
@@ -191,16 +214,24 @@ class BrickMetadata(BrickRecord):
parameters['set_id'] = brickset.fields.id
parameters['state'] = state
rows, _ = BrickSQL().execute_and_commit(
self.update_set_state_query,
parameters=parameters,
name=self.as_column(),
)
if commit:
rows, _ = BrickSQL().execute_and_commit(
self.update_set_state_query,
parameters=parameters,
name=self.as_column(),
)
else:
rows, _ = BrickSQL().execute(
self.update_set_state_query,
parameters=parameters,
defer=True,
name=self.as_column(),
)
if rows != 1:
raise DatabaseException('Could not update the {kind} "{name}" state for set {set} ({id})'.format( # noqa: E501
# When deferred, rows will be -1, so skip the check
if commit and rows != 1:
raise DatabaseException('Could not update the {kind} state for set {set} ({id})'.format(
kind=self.kind,
name=self.fields.name,
set=brickset.fields.set,
id=brickset.fields.id,
))
@@ -261,3 +292,156 @@ class BrickMetadata(BrickRecord):
))
return value
# Update the selected state of this metadata item for an individual part
def update_individual_part_state(
self,
individual_part: 'IndividualPart',
/,
*,
json: Any | None = None,
state: Any | None = None,
commit: bool = True
) -> Any:
if state is None and json is not None:
state = json.get('value', False)
parameters = self.sql_parameters()
parameters['set_id'] = individual_part.fields.id # set_id parameter accepts any entity id
parameters['state'] = state
# Use the same set query (bricktracker_set_owners/tags/statuses tables accept any entity id)
query_name = self.update_set_state_query
if commit:
rows, _ = BrickSQL().execute_and_commit(
query_name,
parameters=parameters,
name=self.as_column(),
)
else:
rows, _ = BrickSQL().execute(
query_name,
parameters=parameters,
defer=True,
name=self.as_column(),
)
# When deferred, rows will be -1, so skip the check
if commit and rows != 1:
raise DatabaseException('Could not update the {kind} state for individual part {part_id}'.format(
kind=self.kind,
part_id=individual_part.fields.id,
))
# Info
logger.info('{kind} "{name}" state changed to "{state}" for individual part {part_id}'.format(
kind=self.kind,
name=self.fields.name,
state=state,
part_id=individual_part.fields.id,
))
return state
# Update the selected state of this metadata item for an individual minifigure
def update_individual_minifigure_state(
self,
individual_minifigure: 'IndividualMinifigure',
/,
*,
json: Any | None = None,
state: Any | None = None,
commit: bool = True
) -> Any:
if state is None and json is not None:
state = json.get('value', False)
parameters = self.sql_parameters()
parameters['set_id'] = individual_minifigure.fields.id # set_id parameter accepts any entity id
parameters['state'] = state
# Use the same set query (bricktracker_set_owners/tags/statuses tables accept any entity id)
query_name = self.update_set_state_query
if commit:
rows, _ = BrickSQL().execute_and_commit(
query_name,
parameters=parameters,
name=self.as_column(),
)
else:
rows, _ = BrickSQL().execute(
query_name,
parameters=parameters,
defer=True,
name=self.as_column(),
)
# When deferred, rows will be -1, so skip the check
if commit and rows != 1:
raise DatabaseException('Could not update the {kind} state for individual minifigure {minifigure_id}'.format(
kind=self.kind,
minifigure_id=individual_minifigure.fields.id,
))
# Info
logger.info('{kind} "{name}" state changed to "{state}" for individual minifigure {minifigure_id}'.format(
kind=self.kind,
name=self.fields.name,
state=state,
minifigure_id=individual_minifigure.fields.id,
))
return state
# Update the selected state of this metadata item for an individual part lot
def update_individual_part_lot_state(
self,
individual_part_lot: 'IndividualPartLot',
/,
*,
json: Any | None = None,
state: Any | None = None,
commit: bool = True
) -> Any:
if state is None and json is not None:
state = json.get('value', False)
parameters = self.sql_parameters()
parameters['set_id'] = individual_part_lot.fields.id # set_id parameter accepts any entity id
parameters['state'] = state
# Use the same set query (bricktracker_set_owners/tags tables accept any entity id)
query_name = self.update_set_state_query
if commit:
rows, _ = BrickSQL().execute_and_commit(
query_name,
parameters=parameters,
name=self.as_column(),
)
else:
rows, _ = BrickSQL().execute(
query_name,
parameters=parameters,
defer=True,
name=self.as_column(),
)
# When deferred, rows will be -1, so skip the check
if commit and rows != 1:
raise DatabaseException('Could not update the {kind} state for individual part lot {lot_id}'.format(
kind=self.kind,
lot_id=individual_part_lot.fields.id,
))
# Info
logger.info('{kind} "{name}" state changed to "{state}" for individual part lot {lot_id}'.format(
kind=self.kind,
name=self.fields.name,
state=state,
lot_id=individual_part_lot.fields.id,
))
return state
+30
@@ -111,6 +111,16 @@ class BrickMetadataList(BrickRecordList[T]):
in new.filter(**kwargs)
])
# Return the items as a dictionary mapping column names to UUIDs
@classmethod
def as_column_mapping(cls, /, **kwargs) -> dict:
new = cls.new()
return {
record.as_column(): record.fields.id
for record in new.filter(**kwargs)
}
# Grab a specific status
@classmethod
def get(cls, id: str | None, /, *, allow_none: bool = False) -> T:
@@ -174,3 +184,23 @@ class BrickMetadataList(BrickRecordList[T]):
cls.set_value_endpoint,
id=id,
)
# URL to change the selected value of this metadata item for an individual part
@classmethod
def url_for_individual_part_value(cls, part_id: str, /) -> str:
# Replace 'set' with 'individual_part' in the endpoint name
endpoint = cls.set_value_endpoint.replace('set.', 'individual_part.')
return url_for(
endpoint,
id=part_id,
)
# URL to change the selected value of this metadata item for an individual minifigure
@classmethod
def url_for_individual_minifigure_value(cls, minifigure_id: str, /) -> str:
# Replace 'set' with 'individual_minifigure' in the endpoint name
endpoint = cls.set_value_endpoint.replace('set.', 'individual_minifigure.')
return url_for(
endpoint,
id=minifigure_id,
)
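All of the per-entity URL helpers above follow one convention: rewrite the `set.` blueprint prefix of the configured endpoint into the target entity's prefix. A minimal sketch (the endpoint name below is hypothetical):

```python
def entity_endpoint(set_endpoint: str, entity: str) -> str:
    # Swap the 'set.' blueprint prefix for the target entity's prefix,
    # as url_for_individual_part_value() and friends do above.
    return set_endpoint.replace('set.', f'{entity}.')
```

This keeps one route-naming scheme for sets, individual parts, and individual minifigures without duplicating endpoint constants per entity.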
+88
@@ -0,0 +1,88 @@
"""
Migration 0027: Consolidate metadata tables - remove FK constraints from set metadata tables
This migration removes foreign key constraints from bricktracker_set_owners, _tags, and _statuses
so they can accept any entity ID (sets, individual parts, individual minifigures, individual part lots).
Since these tables have dynamically added columns, we need to read the schema and recreate the tables
with all existing columns but without the foreign key constraints.
"""
import logging
from typing import Any, TYPE_CHECKING
if TYPE_CHECKING:
from ..sql import BrickSQL
logger = logging.getLogger(__name__)
def migration_0027(sql: 'BrickSQL') -> dict[str, Any]:
"""
Remove foreign key constraints from set metadata junction tables.
This allows the tables to store metadata for any entity type, not just sets.
"""
tables_to_migrate = [
'bricktracker_set_owners',
'bricktracker_set_tags',
'bricktracker_set_statuses'
]
for table_name in tables_to_migrate:
logger.info('Migrating {table_name} to remove foreign key constraint'.format(
table_name=table_name
))
# Get the current table schema
cursor = sql.cursor.execute(f"PRAGMA table_info({table_name})")
columns = cursor.fetchall()
# Build column definitions for new table (without FK constraint)
column_defs = []
column_names = []
for col in columns:
col_name = col[1]
col_type = col[2]
col_not_null = col[3]
col_default = col[4]
col_pk = col[5]
column_names.append(f'"{col_name}"')
col_def = f'"{col_name}" {col_type}'
if col_pk:
col_def += ' PRIMARY KEY'
if col_not_null and not col_pk:
if col_default is not None:
col_def += f' NOT NULL DEFAULT {col_default}'
else:
col_def += ' NOT NULL'
elif col_default is not None:
col_def += f' DEFAULT {col_default}'
column_defs.append(col_def)
# Create new table without foreign key constraint
new_table_name = f'{table_name}_new'
create_sql = f'CREATE TABLE "{new_table_name}" ({", ".join(column_defs)})'
logger.debug('Creating new table: {sql}'.format(sql=create_sql))
sql.cursor.execute(create_sql)
# Copy all data
column_list = ', '.join(column_names)
copy_sql = f'INSERT INTO "{new_table_name}" ({column_list}) SELECT {column_list} FROM "{table_name}"'
logger.debug('Copying data: {sql}'.format(sql=copy_sql))
sql.cursor.execute(copy_sql)
# Drop old table
sql.cursor.execute(f'DROP TABLE "{table_name}"')
# Rename new table to old name
sql.cursor.execute(f'ALTER TABLE "{new_table_name}" RENAME TO "{table_name}"')
logger.info('Successfully migrated {table_name}'.format(table_name=table_name))
logger.info('Migration 0027 complete - all set metadata tables now accept any entity ID')
return {}
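Since SQLite cannot drop a foreign key in place, migration 0027 uses the standard table-rebuild pattern: read the schema via `PRAGMA table_info`, create a constraint-free copy, copy the rows, then swap the tables. A self-contained sketch with hypothetical table names:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE parent(id TEXT PRIMARY KEY)')
con.execute(
    'CREATE TABLE child(id TEXT PRIMARY KEY, '
    'parent_id TEXT REFERENCES parent(id))'
)
con.execute("INSERT INTO child VALUES ('a', 'p1')")

# Read the existing column names from the schema
cols = [row[1] for row in con.execute('PRAGMA table_info(child)')]
col_list = ', '.join(f'"{c}"' for c in cols)

# Recreate without the REFERENCES clause, copy rows, then swap
con.execute('CREATE TABLE child_new(id TEXT PRIMARY KEY, parent_id TEXT)')
con.execute(f'INSERT INTO child_new ({col_list}) SELECT {col_list} FROM child')
con.execute('DROP TABLE child')
con.execute('ALTER TABLE child_new RENAME TO child')
```

Afterwards `PRAGMA foreign_key_list(child)` returns no rows, so the table accepts any entity id.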
+7 -6
@@ -33,11 +33,7 @@ class BrickMinifigure(RebrickableMinifigure):
)
)
if not refresh:
# Insert into database
self.insert(commit=False)
# Load the inventory
# Load the inventory (needed to count parts for rebrickable record)
if not BrickPartList.download(
socket,
self.brickset,
@@ -46,9 +42,14 @@ class BrickMinifigure(RebrickableMinifigure):
):
return False
# Insert the rebrickable set into database (after counting parts)
# Insert the rebrickable minifigure into database first (parent record)
# This must happen before inserting into bricktracker_minifigures due to FK constraint
self.insert_rebrickable()
if not refresh:
# Insert into bricktracker_minifigures database (child record)
self.insert(commit=False)
except Exception as e:
socket.fail(
message='Error while importing minifigure {figure} from {set}: {error}'.format( # noqa: E501
+111 -5
@@ -20,8 +20,8 @@ class BrickMinifigureList(BrickRecordList[BrickMinifigure]):
order: str
# Queries
all_query: str = 'minifigure/list/all'
all_by_owner_query: str = 'minifigure/list/all_by_owner'
all_query: str = 'minifigure/list/all_unified'
all_by_owner_query: str = 'minifigure/list/all_by_owner_unified'
damaged_part_query: str = 'minifigure/list/damaged_part'
last_query: str = 'minifigure/list/last'
missing_part_query: str = 'minifigure/list/missing_part'
@@ -43,6 +43,31 @@ class BrickMinifigureList(BrickRecordList[BrickMinifigure]):
return self
# Load all minifigures with problems filter
def all_filtered(self, /, owner_id: str | None = None, problems_filter: str = 'all', theme_id: str = 'all', year: str = 'all', individuals_filter: str = 'all') -> Self:
# Save the owner_id parameter
if owner_id is not None:
self.fields.owner_id = owner_id
context = {}
if problems_filter and problems_filter != 'all':
context['problems_filter'] = problems_filter
if theme_id and theme_id != 'all':
context['theme_id'] = theme_id
if year and year != 'all':
context['year'] = year
if individuals_filter and individuals_filter != 'all':
context['individuals_filter'] = individuals_filter
# Choose query based on whether owner filtering is needed
if owner_id and owner_id != 'all':
query = self.all_by_owner_query
else:
query = self.all_query
self.list(override_query=query, **context)
return self
# Load all minifigures by owner
def all_by_owner(self, owner_id: str | None = None, /) -> Self:
# Save the owner_id parameter
@@ -53,6 +78,84 @@ class BrickMinifigureList(BrickRecordList[BrickMinifigure]):
return self
# Load all minifigures by owner with problems filter
def all_by_owner_filtered(self, /, owner_id: str | None = None, problems_filter: str = 'all', theme_id: str = 'all', year: str = 'all', individuals_filter: str = 'all') -> Self:
# Save the owner_id parameter
self.fields.owner_id = owner_id
context = {}
if problems_filter and problems_filter != 'all':
context['problems_filter'] = problems_filter
if theme_id and theme_id != 'all':
context['theme_id'] = theme_id
if year and year != 'all':
context['year'] = year
if individuals_filter and individuals_filter != 'all':
context['individuals_filter'] = individuals_filter
# Load the minifigures from the database
self.list(override_query=self.all_by_owner_query, **context)
return self
# Load minifigures with pagination support
def all_filtered_paginated(
self,
owner_id: str | None = None,
problems_filter: str = 'all',
theme_id: str = 'all',
year: str = 'all',
individuals_filter: str = 'all',
search_query: str | None = None,
page: int = 1,
per_page: int = 50,
sort_field: str | None = None,
sort_order: str = 'asc'
) -> tuple[Self, int]:
# Prepare filter context
filter_context = {}
if owner_id and owner_id != 'all':
filter_context['owner_id'] = owner_id
list_query = self.all_by_owner_query
else:
list_query = self.all_query
if search_query:
filter_context['search_query'] = search_query
if problems_filter and problems_filter != 'all':
filter_context['problems_filter'] = problems_filter
if theme_id and theme_id != 'all':
filter_context['theme_id'] = theme_id
if year and year != 'all':
filter_context['year'] = year
if individuals_filter and individuals_filter != 'all':
filter_context['individuals_filter'] = individuals_filter
# Field mapping for sorting (using column names from the unified query)
field_mapping = {
'name': '"name"',
'parts': '"number_of_parts"',
'quantity': '"total_quantity"',
'missing': '"total_missing"',
'damaged': '"total_damaged"',
'sets': '"total_sets"'
}
# Use the base pagination method
return self.paginate(
page=page,
per_page=per_page,
sort_field=sort_field,
sort_order=sort_order,
list_query=list_query,
field_mapping=field_mapping,
**filter_context
)
# Minifigures with a damaged part
def damaged_part(self, part: str, color: int, /) -> Self:
# Save the parameters to the fields
@@ -95,16 +198,19 @@ class BrickMinifigureList(BrickRecordList[BrickMinifigure]):
brickset = None
# Prepare template context for owner filtering
context = {}
context_vars = {}
if hasattr(self.fields, 'owner_id') and self.fields.owner_id is not None:
context['owner_id'] = self.fields.owner_id
context_vars['owner_id'] = self.fields.owner_id
# Merge with any additional context passed in
context_vars.update(context)
# Load the sets from the database
for record in super().select(
override_query=override_query,
order=order,
limit=limit,
**context
**context_vars
):
minifigure = BrickMinifigure(brickset=brickset, record=record)
+1
@@ -15,6 +15,7 @@ NAVBAR: Final[list[dict[str, Any]]] = [
{'e': 'minifigure.list', 't': 'Minifigures', 'i': 'group-line', 'f': 'HIDE_ALL_MINIFIGURES'}, # noqa: E501
{'e': 'instructions.list', 't': 'Instructions', 'i': 'file-line', 'f': 'HIDE_ALL_INSTRUCTIONS'}, # noqa: E501
{'e': 'storage.list', 't': 'Storages', 'i': 'archive-2-line', 'f': 'HIDE_ALL_STORAGES'}, # noqa: E501
{'e': 'statistics.overview', 't': 'Statistics', 'i': 'bar-chart-line', 'f': 'HIDE_STATISTICS'}, # noqa: E501
{'e': 'wish.list', 't': 'Wishlist', 'i': 'gift-line', 'f': 'HIDE_WISHES'},
{'e': 'admin.admin', 't': 'Admin', 'i': 'settings-4-line', 'f': 'HIDE_ADMIN'}, # noqa: E501
]
+52
@@ -0,0 +1,52 @@
from flask import current_app, request
from typing import Any, Dict, Tuple
def get_pagination_config(entity_type: str) -> Tuple[int, bool]:
"""Get pagination configuration for an entity type (sets, parts, minifigures)"""
# Check if pagination is enabled for this specific entity type
pagination_key = f'{entity_type.upper()}_SERVER_SIDE_PAGINATION'
use_pagination = current_app.config.get(pagination_key, False)
if not use_pagination:
return 0, False
# Determine page size based on device type and entity
user_agent = request.headers.get('User-Agent', '').lower()
is_mobile = any(device in user_agent for device in ['mobile', 'android', 'iphone', 'ipad'])
# Get appropriate config keys based on entity type
entity_upper = entity_type.upper()
desktop_key = f'{entity_upper}_PAGINATION_SIZE_DESKTOP'
mobile_key = f'{entity_upper}_PAGINATION_SIZE_MOBILE'
per_page = current_app.config[mobile_key] if is_mobile else current_app.config[desktop_key]
return per_page, is_mobile
def build_pagination_context(page: int, per_page: int, total_count: int, is_mobile: bool) -> Dict[str, Any]:
"""Build pagination context for templates"""
total_pages = (total_count + per_page - 1) // per_page if total_count > 0 else 1
has_prev = page > 1
has_next = page < total_pages
return {
'page': page,
'per_page': per_page,
'total_count': total_count,
'total_pages': total_pages,
'has_prev': has_prev,
'has_next': has_next,
'is_mobile': is_mobile
}
def get_request_params() -> Tuple[str, str, str, int]:
"""Extract common request parameters for pagination"""
search_query = request.args.get('search', '').strip()
sort_field = request.args.get('sort', '')
sort_order = request.args.get('order', 'asc')
page = int(request.args.get('page', 1))
return search_query, sort_field, sort_order, page
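The page count in `build_pagination_context` is a ceiling division with a floor of one page, so an empty result set still renders a single (empty) page. Factored out as a sketch:

```python
def total_pages(total_count: int, per_page: int) -> int:
    # Ceiling division, matching build_pagination_context above;
    # zero results still produce one (empty) page.
    return (total_count + per_page - 1) // per_page if total_count > 0 else 1
```

For example, 101 items at 50 per page span 3 pages, and exactly 100 items span 2.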
+40 -18
@@ -5,33 +5,55 @@ from .exceptions import ErrorException
def parse_set(set: str, /) -> str:
number, _, version = set.partition('-')
# Making sure both are integers
# Set number can be alphanumeric (e.g., "McDR6US", "10312", "COMCON035")
# Just validate it's not empty
if not number or number.strip() == '':
raise ErrorException('Set number cannot be empty')
# Clean up the number (trim whitespace)
number = number.strip()
# Version defaults to 1 if not provided
if version == '':
version = 1
version = '1'
# Version must be a valid number (but preserve leading zeros for minifigures)
try:
number = int(number)
except Exception:
raise ErrorException('Number "{number}" is not a number'.format(
number=number,
))
try:
version = int(version)
version_int = int(version)
except Exception:
raise ErrorException('Version "{version}" is not a number'.format(
version=version,
))
# Make sure both are positive
if number < 0:
raise ErrorException('Number "{number}" should be positive'.format(
number=number,
))
if version < 0:
raise ErrorException('Version "{version}" should be positive'.format( # noqa: E501
if version_int < 0:
raise ErrorException('Version "{version}" should be positive'.format(
version=version,
))
# Preserve original version string to keep leading zeros (important for minifigures like fig-000484)
return '{number}-{version}'.format(number=number, version=version)
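A standalone sketch of the same parsing rules, with `ValueError` standing in for `ErrorException` so the snippet runs without the project's modules:

```python
def parse_set(set_id: str) -> str:
    number, _, version = set_id.partition('-')
    number = number.strip()
    if not number:
        raise ValueError('Set number cannot be empty')
    # Version defaults to 1 when absent; the string form is kept
    # so leading zeros survive
    if version == '':
        version = '1'
    if int(version) < 0:  # int() raises ValueError on non-numeric versions
        raise ValueError(f'Version "{version}" should be positive')
    return f'{number}-{version}'

print(parse_set('10312'))      # 10312-1
print(parse_set('McDR6US-2'))  # McDR6US-2
```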
# Make sense of string supposed to contain a minifigure ID
def parse_minifig(figure: str, /) -> str:
# Minifigure format is typically fig-XXXXXX
# We'll accept with or without the 'fig-' prefix
figure = figure.strip()
if not figure.startswith('fig-'):
# Try to add the prefix if it's just numbers
if figure.isdigit():
figure = 'fig-{figure}'.format(figure=figure.zfill(6))
else:
raise ErrorException('Minifigure "{figure}" must start with "fig-"'.format(
figure=figure,
))
# Validate format: fig-XXXXXX where X can be digits or letters
parts = figure.split('-')
if len(parts) != 2 or parts[0] != 'fig':
raise ErrorException('Invalid minifigure format "{figure}". Expected format: fig-XXXXXX'.format(
figure=figure,
))
return figure
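The same normalization as a runnable sketch (again with `ValueError` in place of `ErrorException`):

```python
def parse_minifig(figure: str) -> str:
    figure = figure.strip()
    if not figure.startswith('fig-'):
        if figure.isdigit():
            # Bare digits get the prefix and six-digit zero padding
            figure = 'fig-' + figure.zfill(6)
        else:
            raise ValueError(f'Minifigure "{figure}" must start with "fig-"')
    parts = figure.split('-')
    if len(parts) != 2 or parts[0] != 'fig':
        raise ValueError(f'Invalid minifigure format "{figure}"')
    return figure

print(parse_minifig('484'))  # fig-000484
```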
+194 -19
@@ -9,6 +9,7 @@ from .exceptions import ErrorException, NotFoundException
from .rebrickable_part import RebrickablePart
from .sql import BrickSQL
if TYPE_CHECKING:
from .individual_minifigure import IndividualMinifigure
from .minifigure import BrickMinifigure
from .set import BrickSet
from .socket import BrickSocket
@@ -23,6 +24,7 @@ class BrickPart(RebrickablePart):
# Queries
insert_query: str = 'part/insert'
update_on_refresh_query: str = 'part/update_on_refresh'
generic_query: str = 'part/select/generic'
select_query: str = 'part/select/specific'
@@ -32,6 +34,7 @@ class BrickPart(RebrickablePart):
*,
brickset: 'BrickSet | None' = None,
minifigure: 'BrickMinifigure | None' = None,
individual_minifigure: 'IndividualMinifigure | None' = None,
record: Row | dict[str, Any] | None = None
):
super().__init__(
@@ -40,7 +43,12 @@ class BrickPart(RebrickablePart):
record=record
)
self.individual_minifigure = individual_minifigure
if self.individual_minifigure is not None:
self.identifier = self.individual_minifigure.fields.id
self.kind = 'Individual Minifigure'
elif self.minifigure is not None:
self.identifier = self.minifigure.fields.figure
self.kind = 'Minifigure'
elif self.brickset is not None:
@@ -62,13 +70,35 @@ class BrickPart(RebrickablePart):
)
)
# Insert the rebrickable part into database first (parent record)
# This must happen before inserting into bricktracker_parts due to FK constraint
self.insert_rebrickable()
if refresh:
params = self.sql_parameters()
# Track this part in the refresh temp table (for orphan cleanup later)
BrickSQL().execute(
'part/track_refresh_part',
parameters=params,
defer=False
)
# Try to update existing part first (preserves checked, missing, and damaged states)
# Note: Cannot defer this because we need to check if rows were affected
rows, _ = BrickSQL().execute(
self.update_on_refresh_query,
parameters=params,
defer=False
)
# If no rows were updated, the part doesn't exist yet, so insert it
if rows == 0:
self.insert(commit=False)
else:
# Insert into bricktracker_parts database (child record)
self.insert(commit=False)
except Exception as e:
socket.fail(
message='Error while importing part {part} from {kind} {identifier}: {error}'.format( # noqa: E501
@@ -159,6 +189,104 @@ class BrickPart(RebrickablePart):
return self
# Select a specific part from an individual minifigure instance
def select_specific_individual_minifigure(
self,
individual_minifigure: 'IndividualMinifigure',
part: str,
color: int,
spare: int,
/,
) -> Self:
# Save the parameters to the fields
self.individual_minifigure = individual_minifigure
self.fields.part = part
self.fields.color = color
self.fields.spare = spare
if not self.select(override_query='individual_minifigure/part/select/specific'):
raise NotFoundException(
'Part {part} with color {color} (spare: {spare}) from individual minifigure {id} was not found in the database'.format( # noqa: E501
part=self.fields.part,
color=self.fields.color,
spare=self.fields.spare,
id=individual_minifigure.fields.id,
),
)
return self
# Update checked state for part walkthrough
def update_checked(self, json: Any | None, /) -> bool:
# Handle both direct 'checked' key and changer.js 'value' key format
if json:
checked = json.get('checked', json.get('value', False))
else:
checked = False
checked = bool(checked)
# Update the field
self.fields.checked = checked
BrickSQL().execute_and_commit(
'part/update/checked',
parameters=self.sql_parameters()
)
return checked
# Update checked state for individual minifigure part
def update_checked_individual_minifigure(self, json: Any | None, /) -> bool:
# Handle both direct 'checked' key and changer.js 'value' key format
if json:
checked = json.get('checked', json.get('value', False))
else:
checked = False
checked = bool(checked)
self.fields.checked = checked
BrickSQL().execute_and_commit(
'individual_minifigure/part/update/checked',
parameters=self.sql_parameters()
)
return checked
# Compute the url for updating checked state
def url_for_checked(self, /) -> str:
# Different URL for individual minifigure part
if self.individual_minifigure is not None:
return url_for(
'individual_minifigure.checked_part',
id=self.individual_minifigure.fields.id,
part=self.fields.part,
color=self.fields.color,
spare=self.fields.spare,
)
# Different URL for a set minifigure part
elif self.minifigure is not None:
return url_for(
'set.checked_part',
id=self.fields.id,
figure=self.minifigure.fields.figure,
part=self.fields.part,
color=self.fields.color,
spare=self.fields.spare,
)
# Set part
else:
return url_for(
'set.checked_part',
id=self.fields.id,
figure=None,
part=self.fields.part,
color=self.fields.color,
spare=self.fields.spare,
)
# Update a problematic part
def update_problem(self, problem: str, json: Any | None, /) -> int:
amount: str | int = json.get('value', '') # type: ignore
@@ -189,20 +317,67 @@ class BrickPart(RebrickablePart):
return amount
# Update a problematic part for individual minifigure
def update_problem_individual_minifigure(self, problem: str, json: Any | None, /) -> int:
amount: str | int = json.get('value', '') # type: ignore
# We need a positive integer
try:
if amount == '':
amount = 0
amount = int(amount)
except Exception:
raise ErrorException('"{amount}" is not a valid integer'.format(
amount=amount
))
if amount < 0:
raise ErrorException('Cannot set a negative amount')
setattr(self.fields, problem, amount)
BrickSQL().execute_and_commit(
'individual_minifigure/part/update/{problem}'.format(problem=problem),
parameters=self.sql_parameters()
)
return amount
# Compute the url for problematic part
def url_for_problem(self, problem: str, /) -> str:
# Different URL for individual minifigure part
if self.individual_minifigure is not None:
return url_for(
'individual_minifigure.problem_part',
id=self.individual_minifigure.fields.id,
part=self.fields.part,
color=self.fields.color,
spare=self.fields.spare,
problem=problem,
)
# Different URL for set minifigure part
elif self.minifigure is not None:
return url_for(
'set.problem_part',
id=self.fields.id,
figure=self.minifigure.fields.figure,
part=self.fields.part,
color=self.fields.color,
spare=self.fields.spare,
problem=problem,
)
# Set part
else:
return url_for(
'set.problem_part',
id=self.fields.id,
figure=None,
part=self.fields.part,
color=self.fields.color,
spare=self.fields.spare,
problem=problem,
)
+230 -8
@@ -19,6 +19,7 @@ logger = logging.getLogger(__name__)
class BrickPartList(BrickRecordList[BrickPart]):
brickset: 'BrickSet | None'
minifigure: 'BrickMinifigure | None'
individual_minifigure: 'IndividualMinifigure | None'
order: str
# Queries
@@ -57,8 +58,8 @@ class BrickPartList(BrickRecordList[BrickPart]):
return self
# Load all parts with filters (owner, color, theme, year, individuals)
def all_filtered(self, owner_id: str | None = None, color_id: str | None = None, theme_id: str | None = None, year: str | None = None, individuals_filter: str | None = None, /) -> Self:
# Save the filter parameters
if owner_id is not None:
self.fields.owner_id = owner_id
@@ -71,11 +72,81 @@ class BrickPartList(BrickRecordList[BrickPart]):
else:
query = self.all_query
# Prepare context for query
context = {}
# Hide spare parts from display if configured
if current_app.config.get('HIDE_SPARE_PARTS', False):
context['skip_spare_parts'] = True
if theme_id and theme_id != 'all':
context['theme_id'] = theme_id
if year and year != 'all':
context['year'] = year
if individuals_filter and individuals_filter == 'only':
context['individuals_filter'] = True
# Load the parts from the database
self.list(override_query=query, **context)
return self
# Load parts with pagination support
def all_filtered_paginated(
self,
owner_id: str | None = None,
color_id: str | None = None,
theme_id: str | None = None,
year: str | None = None,
individuals_filter: str | None = None,
search_query: str | None = None,
page: int = 1,
per_page: int = 50,
sort_field: str | None = None,
sort_order: str = 'asc'
) -> tuple[Self, int]:
# Prepare filter context
filter_context = {}
if owner_id and owner_id != 'all':
filter_context['owner_id'] = owner_id
list_query = self.all_by_owner_query
else:
list_query = self.all_query
if color_id and color_id != 'all':
filter_context['color_id'] = color_id
if theme_id and theme_id != 'all':
filter_context['theme_id'] = theme_id
if year and year != 'all':
filter_context['year'] = year
if individuals_filter and individuals_filter == 'only':
filter_context['individuals_filter'] = True
if search_query:
filter_context['search_query'] = search_query
# Hide spare parts from display if configured
if current_app.config.get('HIDE_SPARE_PARTS', False):
filter_context['skip_spare_parts'] = True
# Field mapping for sorting
field_mapping = {
'name': '"rebrickable_parts"."name"',
'color': '"rebrickable_parts"."color_name"',
'quantity': '"total_quantity"',
'missing': '"total_missing"',
'damaged': '"total_damaged"',
'sets': '"total_sets"',
'minifigures': '"total_minifigures"'
}
# Use the base pagination method
return self.paginate(
page=page,
per_page=per_page,
sort_field=sort_field,
sort_order=sort_order,
list_query=list_query,
field_mapping=field_mapping,
**filter_context
)
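The repeated `'all'`-sentinel handling could be factored into one helper; a hypothetical sketch, not the project's actual API:

```python
def build_filter_context(**filters) -> dict:
    # Drop unset filters and the 'all' sentinel so only real
    # constraints reach the SQL template context
    return {k: v for k, v in filters.items() if v and v != 'all'}

print(build_filter_context(owner_id='all', color_id='5', year=None))
# {'color_id': '5'}
```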
# Base part list
def list(
self,
@@ -84,6 +155,7 @@ class BrickPartList(BrickRecordList[BrickPart]):
override_query: str | None = None,
order: str | None = None,
limit: int | None = None,
offset: int | None = None,
**context: Any,
) -> None:
if order is None:
@@ -99,29 +171,38 @@ class BrickPartList(BrickRecordList[BrickPart]):
else:
minifigure = None
if hasattr(self, 'individual_minifigure'):
individual_minifigure = self.individual_minifigure
else:
individual_minifigure = None
# Prepare template context for filtering
context_vars = {}
if hasattr(self.fields, 'owner_id') and self.fields.owner_id is not None:
context_vars['owner_id'] = self.fields.owner_id
if hasattr(self.fields, 'color_id') and self.fields.color_id is not None:
context_vars['color_id'] = self.fields.color_id
if hasattr(self.fields, 'search_query') and self.fields.search_query:
context_vars['search_query'] = self.fields.search_query
# Merge with any additional context passed in
context_vars.update(context)
# Load the sets from the database
for record in super().select(
override_query=override_query,
order=order,
limit=limit,
offset=offset,
**context_vars
):
part = BrickPart(
brickset=brickset,
minifigure=minifigure,
individual_minifigure=individual_minifigure,
record=record,
)
if current_app.config['SKIP_SPARE_PARTS'] and part.fields.spare:
continue
self.records.append(part)
# List specific parts from a brickset or minifigure
@@ -136,8 +217,13 @@ class BrickPartList(BrickRecordList[BrickPart]):
self.brickset = brickset
self.minifigure = minifigure
# Prepare context for hiding spare parts if configured
context = {}
if current_app.config.get('HIDE_SPARE_PARTS', False):
context['skip_spare_parts'] = True
# Load the parts from the database
self.list(**context)
return self
@@ -150,8 +236,31 @@ class BrickPartList(BrickRecordList[BrickPart]):
# Save the minifigure
self.minifigure = minifigure
# Prepare context for hiding spare parts if configured
context = {}
if current_app.config.get('HIDE_SPARE_PARTS', False):
context['skip_spare_parts'] = True
# Load the parts from the database
self.list(override_query=self.minifigure_query, **context)
return self
# Load parts from an individual minifigure instance
def from_individual_minifigure(
self,
individual_minifigure: 'IndividualMinifigure',
/,
) -> Self:
from .individual_minifigure import IndividualMinifigure
# Save the individual minifigure reference
self.individual_minifigure = individual_minifigure
# Load the parts for this individual minifigure instance
self.list(
override_query='individual_minifigure/part/list/from_instance'
)
return self
@@ -175,12 +284,115 @@ class BrickPartList(BrickRecordList[BrickPart]):
return self
# Last added parts
def last(self, /, *, limit: int = 6) -> Self:
if current_app.config['RANDOM']:
order = 'RANDOM()'
else:
# Since bricktracker_parts has a composite primary key, it doesn't have a rowid
# Order by id DESC (which are UUIDs with timestamps) to get recent parts
order = '"combined"."id" DESC, "combined"."part" ASC'
context = {}
if current_app.config.get('HIDE_SPARE_PARTS', False):
context['skip_spare_parts'] = True
self.list(override_query=self.last_query, order=order, limit=limit, **context)
return self
# Load problematic parts
def problem(self, /) -> Self:
self.list(override_query=self.problem_query)
return self
def problem_filtered(self, owner_id: str | None = None, color_id: str | None = None, theme_id: str | None = None, year: str | None = None, storage_id: str | None = None, tag_id: str | None = None, /) -> Self:
# Save the filter parameters for client-side filtering
if owner_id is not None:
self.fields.owner_id = owner_id
if color_id is not None:
self.fields.color_id = color_id
# Prepare context for query
context = {}
if owner_id and owner_id != 'all':
context['owner_id'] = owner_id
if color_id and color_id != 'all':
context['color_id'] = color_id
if theme_id and theme_id != 'all':
context['theme_id'] = theme_id
if year and year != 'all':
context['year'] = year
if storage_id and storage_id != 'all':
context['storage_id'] = storage_id
if tag_id and tag_id != 'all':
context['tag_id'] = tag_id
# Hide spare parts from display if configured
if current_app.config.get('HIDE_SPARE_PARTS', False):
context['skip_spare_parts'] = True
# Load the problematic parts from the database
self.list(override_query=self.problem_query, **context)
return self
def problem_paginated(
self,
owner_id: str | None = None,
color_id: str | None = None,
theme_id: str | None = None,
year: str | None = None,
storage_id: str | None = None,
tag_id: str | None = None,
search_query: str | None = None,
page: int = 1,
per_page: int = 50,
sort_field: str | None = None,
sort_order: str = 'asc'
) -> tuple[Self, int]:
# Prepare filter context
filter_context = {}
if owner_id and owner_id != 'all':
filter_context['owner_id'] = owner_id
if color_id and color_id != 'all':
filter_context['color_id'] = color_id
if theme_id and theme_id != 'all':
filter_context['theme_id'] = theme_id
if year and year != 'all':
filter_context['year'] = year
if storage_id and storage_id != 'all':
filter_context['storage_id'] = storage_id
if tag_id and tag_id != 'all':
filter_context['tag_id'] = tag_id
if search_query:
filter_context['search_query'] = search_query
# Hide spare parts from display if configured
if current_app.config.get('HIDE_SPARE_PARTS', False):
filter_context['skip_spare_parts'] = True
# Field mapping for sorting
field_mapping = {
'name': '"rebrickable_parts"."name"',
'color': '"rebrickable_parts"."color_name"',
'quantity': '"total_quantity"',
'missing': '"total_missing"',
'damaged': '"total_damaged"',
'sets': '"total_sets"',
'minifigures': '"total_minifigures"'
}
# Use the base pagination method with problem query
return self.paginate(
page=page,
per_page=per_page,
sort_field=sort_field,
sort_order=sort_order,
list_query=self.problem_query,
field_mapping=field_mapping,
**filter_context
)
# Return a dict with common SQL parameters for a parts list
def sql_parameters(self, /) -> dict[str, Any]:
parameters: dict[str, Any] = super().sql_parameters()
@@ -189,6 +401,10 @@ class BrickPartList(BrickRecordList[BrickPart]):
if self.brickset is not None:
parameters['id'] = self.brickset.fields.id
# Use the individual minifigure ID if present
if hasattr(self, 'individual_minifigure') and self.individual_minifigure is not None:
parameters['id'] = self.individual_minifigure.fields.id
# Use the minifigure number if present,
if self.minifigure is not None:
parameters['figure'] = self.minifigure.fields.figure
@@ -256,7 +472,13 @@ class BrickPartList(BrickRecordList[BrickPart]):
# Process each part
number_of_parts: int = 0
skip_spares = current_app.config.get('SKIP_SPARE_PARTS', False)
for part in inventory:
# Skip spare parts if configured
if skip_spares and part.fields.spare:
continue
# Count the number of parts for minifigures
if minifigure is not None:
number_of_parts += part.fields.quantity
+436
@@ -0,0 +1,436 @@
import hashlib
import logging
import os
from pathlib import Path
import time
from typing import Any, NamedTuple, TYPE_CHECKING
from urllib.parse import urljoin
from bs4 import BeautifulSoup
from flask import current_app, url_for
import requests
from .exceptions import ErrorException
if TYPE_CHECKING:
from .socket import BrickSocket
logger = logging.getLogger(__name__)
def get_peeron_user_agent():
"""Get the User-Agent string for Peeron requests from config"""
return current_app.config.get('REBRICKABLE_USER_AGENT',
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36')
def get_peeron_download_delay():
"""Get the delay in milliseconds between Peeron page downloads from config"""
return current_app.config.get('PEERON_DOWNLOAD_DELAY', 1000)
def get_min_image_size():
"""Get the minimum image size for valid Peeron instruction pages from config"""
return current_app.config.get('PEERON_MIN_IMAGE_SIZE', 100)
def get_peeron_instruction_url(set_number: str, version_number: str):
"""Get the Peeron instruction page URL using the configured pattern"""
pattern = current_app.config.get('PEERON_INSTRUCTION_PATTERN', 'http://peeron.com/scans/{set_number}-{version_number}')
return pattern.format(set_number=set_number, version_number=version_number)
def get_peeron_thumbnail_url(set_number: str, version_number: str):
"""Get the Peeron thumbnail base URL using the configured pattern"""
pattern = current_app.config.get('PEERON_THUMBNAIL_PATTERN', 'http://belay.peeron.com/thumbs/{set_number}-{version_number}/')
return pattern.format(set_number=set_number, version_number=version_number)
def get_peeron_scan_url(set_number: str, version_number: str):
"""Get the Peeron scan base URL using the configured pattern"""
pattern = current_app.config.get('PEERON_SCAN_PATTERN', 'http://belay.peeron.com/scans/{set_number}-{version_number}/')
return pattern.format(set_number=set_number, version_number=version_number)
def create_peeron_scraper():
"""Create a requests session configured for Peeron"""
session = requests.Session()
session.headers.update({
"User-Agent": get_peeron_user_agent()
})
return session
def get_peeron_cache_dir():
"""Get the base directory for Peeron caching"""
static_dir = Path(current_app.static_folder)
cache_dir = static_dir / 'images' / 'peeron_cache'
cache_dir.mkdir(parents=True, exist_ok=True)
return cache_dir
def get_set_cache_dir(set_number: str, version_number: str) -> tuple[Path, Path]:
"""Get cache directories for a specific set"""
base_cache_dir = get_peeron_cache_dir()
set_cache_key = f"{set_number}-{version_number}"
full_cache_dir = base_cache_dir / 'full' / set_cache_key
thumb_cache_dir = base_cache_dir / 'thumbs' / set_cache_key
full_cache_dir.mkdir(parents=True, exist_ok=True)
thumb_cache_dir.mkdir(parents=True, exist_ok=True)
return full_cache_dir, thumb_cache_dir
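The cache layout is two sibling trees keyed by `<set>-<version>`; a pure-path sketch of that layout (no directories created, hypothetical base path):

```python
from pathlib import Path

def set_cache_dirs(base: Path, set_number: str, version: str) -> tuple[Path, Path]:
    # Full scans and their thumbnails live side by side under the same key
    key = f"{set_number}-{version}"
    return base / 'full' / key, base / 'thumbs' / key

full_dir, thumb_dir = set_cache_dirs(Path('/tmp/peeron_cache'), '4011', '1')
print(full_dir.as_posix())   # /tmp/peeron_cache/full/4011-1
print(thumb_dir.as_posix())  # /tmp/peeron_cache/thumbs/4011-1
```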
def cache_full_image_and_generate_thumbnail(image_url: str, page_number: str, set_number: str, version_number: str, session=None) -> tuple[str | None, str | None]:
"""
Download and cache full-size image, then generate a thumbnail preview.
Uses the full-size scan URLs from Peeron.
Returns (cached_image_path, thumbnail_url) or (None, None) if caching fails.
"""
try:
full_cache_dir, thumb_cache_dir = get_set_cache_dir(set_number, version_number)
full_filename = f"{page_number}.jpg"
thumb_filename = f"{page_number}.jpg"
full_cache_path = full_cache_dir / full_filename
thumb_cache_path = thumb_cache_dir / thumb_filename
# Return existing cached files if they exist
if full_cache_path.exists() and thumb_cache_path.exists():
set_cache_key = f"{set_number}-{version_number}"
thumbnail_url = url_for('static', filename=f'images/peeron_cache/thumbs/{set_cache_key}/{thumb_filename}')
return str(full_cache_path), thumbnail_url
# Download the full-size image using provided session or create new one
if session is None:
session = create_peeron_scraper()
response = session.get(image_url, timeout=30)
if response.status_code == 200 and len(response.content) > 0:
# Validate it's actually an image by checking minimum size
min_size = get_min_image_size()
if len(response.content) < min_size:
logger.warning(f"Image too small, skipping cache: {image_url}")
return None, None
# Write full-size image to cache
with open(full_cache_path, 'wb') as f:
f.write(response.content)
logger.debug(f"Cached full image: {image_url} -> {full_cache_path}")
# Generate thumbnail from the cached full image
try:
from PIL import Image
with Image.open(full_cache_path) as img:
# Create thumbnail (max 150px on longest side to match template)
img.thumbnail((150, 150), Image.Resampling.LANCZOS)
img.save(thumb_cache_path, 'JPEG', quality=85)
logger.debug(f"Generated thumbnail: {full_cache_path} -> {thumb_cache_path}")
set_cache_key = f"{set_number}-{version_number}"
thumbnail_url = url_for('static', filename=f'images/peeron_cache/thumbs/{set_cache_key}/{thumb_filename}')
return str(full_cache_path), thumbnail_url
except Exception as thumb_error:
logger.error(f"Failed to generate thumbnail for {page_number}: {thumb_error}")
# Clean up the full image if thumbnail generation failed
if full_cache_path.exists():
full_cache_path.unlink()
return None, None
else:
logger.warning(f"Failed to download full image: {image_url}")
return None, None
except Exception as e:
logger.error(f"Error caching full image {image_url}: {e}")
return None, None
def clear_set_cache(set_number: str, version_number: str) -> int:
"""
Clear all cached files for a specific set after PDF generation.
Returns the number of files deleted.
"""
try:
full_cache_dir, thumb_cache_dir = get_set_cache_dir(set_number, version_number)
deleted_count = 0
# Delete full images
if full_cache_dir.exists():
for cache_file in full_cache_dir.glob('*.jpg'):
try:
cache_file.unlink()
deleted_count += 1
logger.debug(f"Deleted cached full image: {cache_file}")
except OSError as e:
logger.warning(f"Failed to delete cache file {cache_file}: {e}")
# Remove directory if empty
try:
full_cache_dir.rmdir()
except OSError:
pass # Directory not empty or other error
# Delete thumbnails
if thumb_cache_dir.exists():
for cache_file in thumb_cache_dir.glob('*.jpg'):
try:
cache_file.unlink()
deleted_count += 1
logger.debug(f"Deleted cached thumbnail: {cache_file}")
except OSError as e:
logger.warning(f"Failed to delete cache file {cache_file}: {e}")
# Remove directory if empty
try:
thumb_cache_dir.rmdir()
except OSError:
pass # Directory not empty or other error
# Try to remove set directory if empty
try:
set_cache_key = f"{set_number}-{version_number}"
if full_cache_dir.parent.name == set_cache_key:
full_cache_dir.parent.rmdir()
if thumb_cache_dir.parent.name == set_cache_key:
thumb_cache_dir.parent.rmdir()
except OSError:
pass # Directory not empty or other error
logger.info(f"Set cache cleanup completed for {set_number}-{version_number}: {deleted_count} files deleted")
return deleted_count
except Exception as e:
logger.error(f"Error during set cache cleanup for {set_number}-{version_number}: {e}")
return 0
def clear_old_cache(max_age_days: int = 7) -> int:
"""
Clear old cache files across all sets.
Returns the number of files deleted.
"""
try:
base_cache_dir = get_peeron_cache_dir()
if not base_cache_dir.exists():
return 0
deleted_count = 0
max_age_seconds = max_age_days * 24 * 60 * 60
current_time = time.time()
# Clean both full and thumbs directories
for cache_type in ['full', 'thumbs']:
cache_type_dir = base_cache_dir / cache_type
if cache_type_dir.exists():
for set_dir in cache_type_dir.iterdir():
if set_dir.is_dir():
for cache_file in set_dir.glob('*.jpg'):
file_age = current_time - os.path.getmtime(cache_file)
if file_age > max_age_seconds:
try:
cache_file.unlink()
deleted_count += 1
logger.debug(f"Deleted old cache file: {cache_file}")
except OSError as e:
logger.warning(f"Failed to delete cache file {cache_file}: {e}")
# Remove empty directories
try:
if not any(set_dir.iterdir()):
set_dir.rmdir()
except OSError:
pass
logger.info(f"Old cache cleanup completed: {deleted_count} files deleted")
return deleted_count
except Exception as e:
logger.error(f"Error during old cache cleanup: {e}")
return 0
class PeeronPage(NamedTuple):
"""Represents a single instruction page from Peeron"""
page_number: str
original_image_url: str # Original Peeron full-size image URL
cached_full_image_path: str # Local full-size cached image path
cached_thumbnail_url: str # Local thumbnail URL for preview
alt_text: str
rotation: int = 0 # Rotation in degrees (0, 90, 180, 270)
# Peeron instruction scraper
class PeeronInstructions(object):
socket: 'BrickSocket | None'
set_number: str
version_number: str
pages: list[PeeronPage]
def __init__(
self,
set_number: str,
version_number: str = '1',
/,
*,
socket: 'BrickSocket | None' = None,
):
# Save the socket
self.socket = socket
# Parse set number (handle both "4011" and "4011-1" formats)
if '-' in set_number:
parts = set_number.split('-', 1)
self.set_number = parts[0]
self.version_number = parts[1] if len(parts) > 1 else '1'
else:
self.set_number = set_number
self.version_number = version_number
# Placeholder for pages
self.pages = []
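The constructor's set-number handling in isolation (hypothetical helper name):

```python
def split_set_number(set_number: str, version: str = '1') -> tuple[str, str]:
    # Accept both "4011" and "4011-1"; an explicit suffix wins over the default
    if '-' in set_number:
        number, _, suffix = set_number.partition('-')
        return number, suffix if suffix else '1'
    return set_number, version

print(split_set_number('4011-2'))  # ('4011', '2')
print(split_set_number('4011'))    # ('4011', '1')
```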
# Check if instructions exist on Peeron (lightweight)
def exists(self, /) -> bool:
"""Check if the set exists on Peeron without caching thumbnails"""
try:
base_url = get_peeron_instruction_url(self.set_number, self.version_number)
scraper = create_peeron_scraper()
response = scraper.get(base_url, timeout=30)
if response.status_code != 200:
return False
soup = BeautifulSoup(response.text, 'html.parser')
# Check for "Browse instruction library" header (set not found)
if soup.find('h1', string="Browse instruction library"):
return False
# Look for thumbnail images to confirm instructions exist
thumbnails = soup.select('table[cellspacing="5"] a img[src^="http://belay.peeron.com/thumbs/"]')
return len(thumbnails) > 0
except Exception:
return False
# Find all available instruction pages on Peeron
def find_pages(self, /) -> list[PeeronPage]:
"""
Scrape Peeron's HTML and return a list of available instruction pages.
Similar to BrickInstructions.find_instructions() but for Peeron.
"""
base_url = get_peeron_instruction_url(self.set_number, self.version_number)
thumb_base_url = get_peeron_thumbnail_url(self.set_number, self.version_number)
scan_base_url = get_peeron_scan_url(self.set_number, self.version_number)
logger.debug(f"[find_pages] fetching HTML from {base_url!r}")
# Set up session with persistent cookies for Peeron (like working dl_peeron.py)
scraper = create_peeron_scraper()
# Download the main HTML page to establish session and cookies
try:
logger.debug(f"[find_pages] Establishing session by visiting: {base_url}")
response = scraper.get(base_url, timeout=30)
logger.debug(f"[find_pages] Main page visit: HTTP {response.status_code}")
if response.status_code != 200:
raise ErrorException(f'Failed to load Peeron page for {self.set_number}-{self.version_number}. HTTP {response.status_code}')
except requests.exceptions.RequestException as e:
raise ErrorException(f'Failed to connect to Peeron: {e}')
# Parse HTML to locate instruction pages
soup = BeautifulSoup(response.text, 'html.parser')
# Check for "Browse instruction library" header (set not found)
if soup.find('h1', string="Browse instruction library"):
raise ErrorException(f'Set {self.set_number}-{self.version_number} not found on Peeron')
# Locate all thumbnail images in the expected table structure
# Use the configured thumbnail pattern to build the expected URL prefix
thumb_base_url = get_peeron_thumbnail_url(self.set_number, self.version_number)
thumbnails = soup.select(f'table[cellspacing="5"] a img[src^="{thumb_base_url}"]')
if not thumbnails:
raise ErrorException(f'No instruction pages found for {self.set_number}-{self.version_number} on Peeron')
pages: list[PeeronPage] = []
total_thumbnails = len(thumbnails)
# Initialize progress if socket is available
if self.socket:
self.socket.progress_total = total_thumbnails
self.socket.progress_count = 0
self.socket.progress(message=f"Starting to cache {total_thumbnails} full images")
for idx, img in enumerate(thumbnails, 1):
thumb_url = img['src']
# Extract the page number from the thumbnail URL
page_number = thumb_url.split('/')[-2]
# Build the full-size scan URL using the page number
full_size_url = f"{scan_base_url}{page_number}/"
logger.debug(f"[find_pages] Page {page_number}: thumb={thumb_url}, full_size={full_size_url}")
# Create alt text for the page
alt_text = f"LEGO Instructions {self.set_number}-{self.version_number} Page {page_number}"
# Report progress if socket is available
if self.socket:
self.socket.progress_count = idx
self.socket.progress(message=f"Caching full image {idx} of {total_thumbnails}")
# Cache the full-size image and generate thumbnail preview using established session
cached_full_path, cached_thumb_url = cache_full_image_and_generate_thumbnail(
full_size_url, page_number, self.set_number, self.version_number, session=scraper
)
# Skip this page if caching failed
if not cached_full_path or not cached_thumb_url:
logger.warning(f"[find_pages] Skipping page {page_number} due to caching failure")
continue
page = PeeronPage(
page_number=page_number,
original_image_url=full_size_url,
cached_full_image_path=cached_full_path,
cached_thumbnail_url=cached_thumb_url,
alt_text=alt_text
)
pages.append(page)
# Cache the pages for later use
self.pages = pages
logger.debug(f"[find_pages] found {len(pages)} pages for {self.set_number}-{self.version_number}")
return pages
# Find instructions with fallback to Peeron
@staticmethod
def find_instructions_with_peeron_fallback(set: str, /) -> tuple[list[tuple[str, str]], list[PeeronPage] | None]:
"""
Enhanced version of BrickInstructions.find_instructions() that falls back to Peeron.
Returns (rebrickable_instructions, peeron_pages).
If rebrickable_instructions is empty, peeron_pages will contain Peeron data.
"""
from .instructions import BrickInstructions
# First try Rebrickable
try:
rebrickable_instructions = BrickInstructions.find_instructions(set)
return rebrickable_instructions, None
except ErrorException as e:
logger.info(f"Rebrickable failed for {set}: {e}. Trying Peeron fallback...")
# Fallback to Peeron
try:
peeron = PeeronInstructions(set)
peeron_pages = peeron.find_pages()
return [], peeron_pages
except ErrorException as peeron_error:
# Both failed, re-raise original Rebrickable error
logger.info(f"Peeron also failed for {set}: {peeron_error}")
raise e from peeron_error
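The fallback flow above (try Rebrickable first, fall back to Peeron, and re-raise the original error with the fallback failure chained) can be sketched in isolation. `with_fallback` and `broken` below are illustrative names, not part of the codebase:

```python
# Generic two-source fallback, mirroring the Rebrickable -> Peeron flow:
# try the primary source; on failure try the fallback; if both fail,
# re-raise the primary error with the fallback error as its cause.
def with_fallback(primary, fallback):
    try:
        return primary(), None
    except Exception as primary_error:
        try:
            return [], fallback()
        except Exception as fallback_error:
            raise primary_error from fallback_error

def broken():
    raise RuntimeError('primary source down')

# Primary fails, fallback succeeds: empty primary result, fallback pages.
instructions, pages = with_fallback(broken, lambda: ['page-1', 'page-2'])
```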
+204
@@ -0,0 +1,204 @@
import logging
import os
import tempfile
import time
from typing import Any, TYPE_CHECKING
from flask import current_app
from PIL import Image
from .exceptions import DownloadException, ErrorException
from .instructions import BrickInstructions
from .peeron_instructions import PeeronPage, get_min_image_size, get_peeron_download_delay, get_peeron_instruction_url, create_peeron_scraper
if TYPE_CHECKING:
from .socket import BrickSocket
logger = logging.getLogger(__name__)
# PDF generator for Peeron instruction pages
class PeeronPDF(object):
socket: 'BrickSocket'
set_number: str
version_number: str
pages: list[PeeronPage]
filename: str
def __init__(
self,
set_number: str,
version_number: str,
pages: list[PeeronPage],
/,
*,
socket: 'BrickSocket',
):
# Save the socket
self.socket = socket
# Save set information
self.set_number = set_number
self.version_number = version_number
self.pages = pages
# Generate filename following BrickTracker conventions
self.filename = f"{set_number}-{version_number}_peeron.pdf"
# Download pages and create PDF
def create_pdf(self, /) -> None:
"""
Downloads selected Peeron pages and merges them into a PDF.
Uses progress updates via socket similar to BrickInstructions.download()
"""
try:
target_path = self._get_target_path()
# Skip if we already have it
if os.path.isfile(target_path):
# Create BrickInstructions instance to get PDF URL
instructions = BrickInstructions(self.filename)
pdf_url = instructions.url()
return self.socket.complete(
message=f'File {self.filename} already exists, skipped - <a href="{pdf_url}" target="_blank" class="btn btn-sm btn-primary ms-2"><i class="ri-external-link-line"></i> Open PDF</a>'
)
# Set up progress tracking
total_pages = len(self.pages)
self.socket.update_total(total_pages)
self.socket.progress_count = 0
self.socket.progress(message=f"Starting PDF creation from {total_pages} cached pages")
# Use cached images directly - no downloads needed!
cached_files_with_rotation = []
missing_pages = []
for i, page in enumerate(self.pages):
# Check if cached file exists
if os.path.isfile(page.cached_full_image_path):
cached_files_with_rotation.append((page.cached_full_image_path, page.rotation))
# Update progress
self.socket.progress_count += 1
self.socket.progress(
message=f"Processing cached page {page.page_number} ({i + 1}/{total_pages})"
)
else:
missing_pages.append(page.page_number)
logger.warning(f"Cached image missing for page {page.page_number}: {page.cached_full_image_path}")
if not cached_files_with_rotation:
raise DownloadException(f"No cached images available for set {self.set_number}-{self.version_number}. Cache may have been cleared.")
elif len(cached_files_with_rotation) < total_pages:
# Partial success
error_msg = f"Only found {len(cached_files_with_rotation)}/{total_pages} cached images."
if missing_pages:
error_msg += f" Missing pages: {', '.join(str(p) for p in missing_pages)}."
logger.warning(error_msg)
# Create PDF from cached images with rotation
self._create_pdf_from_images(cached_files_with_rotation, target_path)
# Success
logger.info(f"Created PDF {self.filename} with {len(cached_files_with_rotation)} pages")
# Create BrickInstructions instance to get PDF URL
instructions = BrickInstructions(self.filename)
pdf_url = instructions.url()
self.socket.complete(
message=f'PDF {self.filename} created with {len(cached_files_with_rotation)} pages - <a href="{pdf_url}" target="_blank" class="btn btn-sm btn-primary ms-2"><i class="ri-external-link-line"></i> Open PDF</a>'
)
# Clean up set cache after successful PDF creation
try:
from .peeron_instructions import clear_set_cache
deleted_count = clear_set_cache(self.set_number, self.version_number)
if deleted_count > 0:
logger.info(f"[create_pdf] Cleaned up {deleted_count} cache files for set {self.set_number}-{self.version_number}")
except Exception as e:
logger.warning(f"[create_pdf] Failed to clean set cache: {e}")
except Exception as e:
logger.error(f"Error creating PDF {self.filename}: {e}")
self.socket.fail(
message=f"Error creating PDF {self.filename}: {e}"
)
# Create PDF from downloaded images
def _create_pdf_from_images(self, image_paths_and_rotations: list[tuple[str, int]], output_path: str, /) -> None:
"""Create a PDF from a list of image files with their rotations"""
try:
# Import FPDF (should be available from requirements)
from fpdf import FPDF
except ImportError:
raise ErrorException("FPDF library not available. Install with: pip install fpdf2")
pdf = FPDF()
for i, (img_path, rotation) in enumerate(image_paths_and_rotations):
try:
# Open image and apply rotation if needed
with Image.open(img_path) as image:
# Apply rotation if specified
if rotation != 0:
# PIL rotation is counter-clockwise, so we negate for clockwise rotation
image = image.rotate(-rotation, expand=True)
width, height = image.size
# Add page with image dimensions (convert pixels to mm)
# 1 pixel = 0.264583 mm (assuming 96 DPI)
page_width = width * 0.264583
page_height = height * 0.264583
pdf.add_page(format=(page_width, page_height))
# Save rotated image to temporary file for FPDF
temp_rotated_path = None
if rotation != 0:
temp_fd, temp_rotated_path = tempfile.mkstemp(suffix='.jpg', prefix=f'peeron_rotated_{i}_')
try:
os.close(temp_fd) # Close file descriptor, we'll use the path
image.save(temp_rotated_path, 'JPEG', quality=95)
pdf.image(temp_rotated_path, x=0, y=0, w=page_width, h=page_height)
finally:
# Clean up rotated temp file
if temp_rotated_path and os.path.exists(temp_rotated_path):
os.remove(temp_rotated_path)
else:
pdf.image(img_path, x=0, y=0, w=page_width, h=page_height)
# Update progress
progress_msg = f"Processing page {i + 1}/{len(image_paths_and_rotations)} into PDF"
if rotation != 0:
progress_msg += f" (rotated {rotation}°)"
self.socket.progress(message=progress_msg)
except Exception as e:
logger.warning(f"Failed to add image {img_path} to PDF: {e}")
continue
# Save the PDF
pdf.output(output_path)
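The pixel-to-millimetre factor used above follows from assuming 96 DPI source scans: 25.4 mm per inch divided by 96 px per inch is roughly 0.264583 mm per pixel. A standalone sketch of that conversion (`px_to_mm` is an illustrative helper, not part of the codebase):

```python
# Convert a pixel dimension to millimetres, assuming a fixed DPI.
# 25.4 mm per inch divided by pixels per inch gives mm per pixel.
def px_to_mm(pixels: int, dpi: float = 96.0) -> float:
    return pixels * 25.4 / dpi

# A 960x720 px scan becomes a 254.0 x 190.5 mm PDF page at 96 DPI.
page_width = px_to_mm(960)
page_height = px_to_mm(720)
```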
# Get target file path
def _get_target_path(self, /) -> str:
"""Get the full path where the PDF should be saved"""
folder = current_app.config['INSTRUCTIONS_FOLDER']
# If folder is absolute, use it directly
# Otherwise, make it relative to app root (not static folder)
if os.path.isabs(folder):
instructions_folder = folder
else:
instructions_folder = os.path.join(current_app.root_path, folder)
return os.path.join(instructions_folder, self.filename)
# Create BrickInstructions instance for the generated PDF
def get_instructions(self, /) -> BrickInstructions:
"""Return a BrickInstructions instance for the generated PDF"""
return BrickInstructions(self.filename)
+37 -14
@@ -53,8 +53,9 @@ class RebrickableImage(object):
if os.path.exists(path):
return
# Get the URL (this handles nil images via url() method)
url = self.url()
-if url is None:
+if not url:
return
# Grab the image
@@ -87,7 +88,7 @@ class RebrickableImage(object):
return self.part.fields.image_id
if self.minifigure is not None:
-if self.minifigure.fields.image is None:
+if not self.minifigure.fields.image:
return RebrickableImage.nil_minifigure_name()
else:
return self.minifigure.fields.figure
@@ -96,27 +97,38 @@ class RebrickableImage(object):
# Return the path depending on the objects provided
def path(self, /) -> str:
folder = self.folder()
# If folder is an absolute path (starts with /), use it directly
# Otherwise, make it relative to app root (current_app.root_path)
if folder.startswith('/'):
base_path = folder
else:
base_path = os.path.join(current_app.root_path, folder)
return os.path.join(
-current_app.static_folder, # type: ignore
-self.folder(),
+base_path,
'{id}.{ext}'.format(id=self.id(), ext=self.extension),
)
# Return the url depending on the objects provided
def url(self, /) -> str:
if self.part is not None:
-if self.part.fields.image is None:
+if not self.part.fields.image:
return current_app.config['REBRICKABLE_IMAGE_NIL']
else:
return self.part.fields.image
if self.minifigure is not None:
-if self.minifigure.fields.image is None:
+if not self.minifigure.fields.image:
return current_app.config['REBRICKABLE_IMAGE_NIL_MINIFIGURE']
else:
return self.minifigure.fields.image
-return self.set.fields.image
+# Handle set images - use nil placeholder if image is null
+if self.set.fields.image is None:
+return current_app.config['REBRICKABLE_IMAGE_NIL']
+else:
+return self.set.fields.image
# Return the name of the nil image file
@staticmethod
@@ -152,10 +164,21 @@ class RebrickableImage(object):
# _, extension = os.path.splitext(self.part_img_url)
extension = '.jpg'
# Compute the path
-path = os.path.join(folder, '{name}{ext}'.format(
-name=name,
-ext=extension,
-))
-return url_for('static', filename=path)
+# Determine which route to use based on folder path
+# If folder contains 'data' (new structure), use data route
+# Otherwise use static route (legacy - relative paths like 'parts', 'sets')
+if 'data' in folder:
+# Extract the folder type from the folder_name config key
+# E.g., 'PARTS_FOLDER' -> 'parts', 'SETS_FOLDER' -> 'sets'
+folder_type = folder_name.replace('_FOLDER', '').lower()
+filename = '{name}{ext}'.format(name=name, ext=extension)
+return url_for('data.serve_data_file', folder=folder_type, filename=filename)
+else:
+# Legacy: folder is relative to static/ (e.g., 'parts' or 'static/parts')
+# Strip 'static/' prefix if present to avoid double /static/ in URL
+folder_clean = folder.removeprefix('static/')
+path = os.path.join(folder_clean, '{name}{ext}'.format(
+name=name,
+ext=extension,
+))
+return url_for('static', filename=path)
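The legacy branch above strips a leading `static/` so `url_for('static', filename=...)` does not end up generating a double `/static/static/...` URL. A minimal sketch of just that cleanup step (`clean_static_filename` is a hypothetical helper for illustration):

```python
# str.removeprefix (Python 3.9+) only strips when the prefix is present,
# so both 'static/parts' and plain 'parts' normalise to the same filename.
def clean_static_filename(folder: str, name: str, ext: str = '.jpg') -> str:
    folder_clean = folder.removeprefix('static/')
    return f"{folder_clean}/{name}{ext}"
```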
+10 -7
@@ -14,7 +14,6 @@ if TYPE_CHECKING:
class RebrickableMinifigure(BrickRecord):
brickset: 'BrickSet | None'
# Queries
select_query: str = 'rebrickable/minifigure/select'
insert_query: str = 'rebrickable/minifigure/insert'
@@ -27,10 +26,8 @@ class RebrickableMinifigure(BrickRecord):
):
super().__init__()
# Save the brickset
self.brickset = brickset
# Ingest the record if it has one
if record is not None:
self.ingest(record)
@@ -62,7 +59,6 @@ class RebrickableMinifigure(BrickRecord):
return parameters
# Self url
def url(self, /) -> str:
return url_for(
'minifigure.details',
@@ -89,17 +85,24 @@ class RebrickableMinifigure(BrickRecord):
if current_app.config['REBRICKABLE_LINKS']:
try:
return current_app.config['REBRICKABLE_LINK_MINIFIGURE_PATTERN'].format( # noqa: E501
-number=self.fields.figure,
+figure=self.fields.figure,
)
except Exception:
pass
return ''
# Compute the url for the bricklink page
# Note: BrickLink uses different minifigure IDs than Rebrickable (e.g., 'adv010' vs 'fig-000359')
# Rebrickable API doesn't provide BrickLink minifigure IDs, so we can't generate valid links
def url_for_bricklink(self, /) -> str:
# BrickLink links disabled for minifigures - no ID mapping available
# Keeping the function for later, in case an ID mapping becomes available.
return ''
# Normalize from Rebrickable
@staticmethod
def from_rebrickable(data: dict[str, Any], /, **_) -> dict[str, Any]:
# Extracting number
number = int(str(data['set_num'])[5:])
return {
@@ -107,5 +110,5 @@ class RebrickableMinifigure(BrickRecord):
'number': int(number),
'name': str(data['set_name']),
'quantity': int(data['quantity']),
-'image': data['set_img_url'],
+'image': str(data['set_img_url']) if data['set_img_url'] else None,
}
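The number extraction in `from_rebrickable` slices the Rebrickable figure id after the `fig-0` prefix; because the ids are zero-padded, `int()` yields the same value either way. A minimal illustration (the sample id is made-up data):

```python
# Rebrickable minifigure ids look like 'fig-000359'; slicing off the first
# five characters leaves the zero-padded digits, and int() drops the padding.
set_num = 'fig-000359'
number = int(set_num[5:])  # '00359' -> 359
```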
+5 -2
@@ -67,8 +67,11 @@ class RebrickablePart(BrickRecord):
def sql_parameters(self, /) -> dict[str, Any]:
parameters = super().sql_parameters()
# Individual minifigure id takes precedence
if hasattr(self, 'individual_minifigure') and self.individual_minifigure is not None:
parameters['id'] = self.individual_minifigure.fields.id
# Set id
-if self.brickset is not None:
+elif self.brickset is not None:
parameters['id'] = self.brickset.fields.id
# Use the minifigure number if present,
@@ -98,7 +101,7 @@ class RebrickablePart(BrickRecord):
# Use BrickLink color ID if available and not None, otherwise fall back to Rebrickable color
bricklink_color = getattr(self.fields, 'bricklink_color_id', None)
color_param = bricklink_color if bricklink_color is not None else self.fields.color
-print(f'BrickLink URL parameters: part={part_param}, color={color_param}') # Debugging line, can be removed later
+# print(f'BrickLink URL parameters: part={part_param}, color={color_param}') # Debugging line, can be removed later
return current_app.config['BRICKLINK_LINK_PART_PATTERN'].format( # noqa: E501
part=part_param,
color=color_param,
+34 -3
@@ -95,6 +95,18 @@ class RebrickableSet(BrickRecord):
socket.auto_progress(message='Parsing set number')
set = parse_set(str(data['set']))
# Check if this is actually a minifigure (starts with fig-)
# If so, redirect to the minifigure handler
if set.startswith('fig-'):
from .individual_minifigure import IndividualMinifigure
# Transform data: minifigure handler expects 'figure' key instead of 'set'
minifig_data = data.copy()
minifig_data['figure'] = minifig_data.pop('set')
if from_download:
return IndividualMinifigure().download(socket, minifig_data)
else:
return IndividualMinifigure().load(socket, minifig_data)
socket.auto_progress(
message='Set {set}: loading from Rebrickable'.format(
set=set,
@@ -155,9 +167,18 @@ class RebrickableSet(BrickRecord):
# Return a short form of the Rebrickable set
def short(self, /, *, from_download: bool = False) -> dict[str, Any]:
# Use nil image URL if set image is null
image_url = self.fields.image
if image_url is None:
# Return path to nil.png from parts folder
image_url = RebrickableImage.static_url(
RebrickableImage.nil_name(),
'PARTS_FOLDER'
)
return {
'download': from_download,
-'image': self.fields.image,
+'image': image_url,
'name': self.fields.name,
'set': self.fields.set,
}
@@ -179,6 +200,15 @@ class RebrickableSet(BrickRecord):
return ''
# Compute the url for the bricklink page
def url_for_bricklink(self, /) -> str:
if current_app.config['BRICKLINK_LINKS']:
return current_app.config['BRICKLINK_LINK_SET_PATTERN'].format(
set_num=self.fields.set
)
return ''
# Compute the url for the refresh button
def url_for_refresh(self, /) -> str:
return url_for('set.refresh', set=self.fields.set)
@@ -187,17 +217,18 @@ class RebrickableSet(BrickRecord):
@staticmethod
def from_rebrickable(data: dict[str, Any], /, **_) -> dict[str, Any]:
# Extracting version and number
# Note: number can be alphanumeric (e.g., "McDR6US", "COMCON035")
number, _, version = str(data['set_num']).partition('-')
return {
'set': str(data['set_num']),
-'number': int(number),
+'number': str(number), # Keep as string to support alphanumeric sets
'version': int(version),
'name': str(data['name']),
'year': int(data['year']),
'theme_id': int(data['theme_id']),
'number_of_parts': int(data['num_parts']),
-'image': str(data['set_img_url']),
+'image': str(data['set_img_url']) if data['set_img_url'] is not None else None,
'url': str(data['set_url']),
'last_modified': str(data['last_modified_dt']),
}
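`str.partition('-')` above splits a Rebrickable set number into its number and version halves, which is why alphanumeric numbers like `COMCON035` survive once the `int()` cast on the number is dropped. A quick sketch using the example ids from the comment above:

```python
# partition('-') splits on the first dash only; the number part is kept as a
# string so alphanumeric set numbers are preserved, while the version is numeric.
number, _, version = 'COMCON035-1'.partition('-')
assert number == 'COMCON035' and int(version) == 1

number, _, version = '10179-1'.partition('-')
assert number == '10179' and int(version) == 1
```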
+7 -12
@@ -11,24 +11,19 @@ class RebrickableSetList(BrickRecordList[RebrickableSet]):
select_query: str = 'rebrickable/set/list'
refresh_query: str = 'rebrickable/set/need_refresh'
-# All the sets
-def all(self, /) -> Self:
+# Implementation of abstract list method
+def list(self, /, *, override_query: str | None = None, **context) -> None:
# Load the sets from the database
-for record in self.select():
+for record in self.select(override_query=override_query, **context):
rebrickable_set = RebrickableSet(record=record)
self.records.append(rebrickable_set)
+# All the sets
+def all(self, /) -> Self:
+self.list()
return self
# Sets needing refresh
def need_refresh(self, /) -> Self:
-# Load the sets from the database
-for record in self.select(
-override_query=self.refresh_query
-):
-rebrickable_set = RebrickableSet(record=record)
-self.records.append(rebrickable_set)
+self.list(override_query=self.refresh_query)
return self
+21
@@ -1,3 +1,4 @@
from datetime import datetime
from sqlite3 import Row
from typing import Any, ItemsView
@@ -5,6 +6,26 @@ from .fields import BrickRecordFields
from .sql import BrickSQL
def format_timestamp(timestamp: float | str | None, format_key: str = 'PURCHASE_DATE_FORMAT') -> str:
if timestamp is not None:
from flask import current_app
# Handle legacy string dates stored in database (convert to numeric timestamp)
if isinstance(timestamp, str):
try:
# Try parsing as date string first
time = datetime.strptime(timestamp, '%Y/%m/%d')
except ValueError:
# If that fails, return the string as-is (shouldn't happen but safe fallback)
return timestamp
else:
# Normal case: numeric timestamp
time = datetime.fromtimestamp(timestamp)
return time.strftime(current_app.config.get(format_key, '%Y/%m/%d'))
return ''
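`format_timestamp` above normalises two storage formats: legacy `YYYY/MM/DD` strings and numeric Unix timestamps. A Flask-free sketch of the same branching, using a fixed format string instead of the app config (a simplification for illustration):

```python
from datetime import datetime

def format_timestamp_plain(timestamp, fmt: str = '%Y/%m/%d') -> str:
    if timestamp is None:
        return ''
    if isinstance(timestamp, str):
        try:
            # Legacy string date stored in the database
            time = datetime.strptime(timestamp, '%Y/%m/%d')
        except ValueError:
            # Unparseable legacy value: pass it through unchanged
            return timestamp
    else:
        # Normal case: numeric Unix timestamp
        time = datetime.fromtimestamp(timestamp)
    return time.strftime(fmt)
```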
# SQLite record
class BrickRecord(object):
select_query: str
+86 -1
@@ -1,5 +1,6 @@
import re
from sqlite3 import Row
-from typing import Any, Generator, Generic, ItemsView, TypeVar, TYPE_CHECKING
+from typing import Any, Generator, Generic, ItemsView, Self, TypeVar, TYPE_CHECKING
from .fields import BrickRecordFields
from .sql import BrickSQL
@@ -72,6 +73,90 @@ class BrickRecordList(Generic[T]):
**context
)
# Generic pagination method for all record lists
def paginate(
self,
page: int = 1,
per_page: int = 50,
sort_field: str | None = None,
sort_order: str = 'asc',
count_query: str | None = None,
list_query: str | None = None,
field_mapping: dict[str, str] | None = None,
**filter_context: Any
) -> tuple['Self', int]:
"""Generic pagination implementation for all record lists"""
from .sql import BrickSQL
# Use provided queries or fall back to defaults
list_query = list_query or getattr(self, 'all_query', None)
if not list_query:
raise NotImplementedError("Subclass must define all_query")
# Calculate offset
offset = (page - 1) * per_page
# Get total count by wrapping the main query
if count_query:
# Use provided count query
count_result = BrickSQL().fetchone(count_query, **filter_context)
total_count = count_result['total_count'] if count_result else 0
else:
# Generate count by wrapping the main query (without ORDER BY, LIMIT, OFFSET)
count_context = {k: v for k, v in filter_context.items()
if k not in ['order', 'limit', 'offset']}
# Get the main query SQL without pagination clauses
main_sql = BrickSQL().load_query(list_query, **count_context)
# Remove ORDER BY, LIMIT, OFFSET clauses for counting
# Remove ORDER BY clause and everything after it that's not part of subqueries
count_sql = re.sub(r'\s+ORDER\s+BY\s+[^)]*?(\s+LIMIT|\s+OFFSET|$)', r'\1', main_sql, flags=re.IGNORECASE)
# Remove LIMIT and OFFSET
count_sql = re.sub(r'\s+LIMIT\s+\d+', '', count_sql, flags=re.IGNORECASE)
count_sql = re.sub(r'\s+OFFSET\s+\d+', '', count_sql, flags=re.IGNORECASE)
# Wrap in COUNT(*)
wrapped_sql = f"SELECT COUNT(*) as total_count FROM ({count_sql.strip()})"
count_result = BrickSQL().raw_execute(wrapped_sql, {}).fetchone()
total_count = count_result['total_count'] if count_result else 0
# Prepare sort order
order_clause = None
if sort_field and field_mapping and sort_field in field_mapping:
sql_field = field_mapping[sort_field]
direction = 'DESC' if sort_order.lower() == 'desc' else 'ASC'
order_clause = f'{sql_field} {direction}'
# Build pagination context
pagination_context = {
'limit': per_page,
'offset': offset,
'order': order_clause or getattr(self, 'order', None),
**filter_context
}
# Load paginated results using the existing list() method
# Check if this is a set list that needs do_theme parameter
if hasattr(self, 'themes'): # Only BrickSetList has this attribute
self.list(override_query=list_query, do_theme=True, **pagination_context)
else:
self.list(override_query=list_query, **pagination_context)
return self, total_count
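The count-query fallback above wraps the list query in `SELECT COUNT(*)` after stripping `ORDER BY`, `LIMIT` and `OFFSET`. The stripping step in isolation, using the same regular expressions (the query text is a made-up example):

```python
import re

# Remove pagination clauses so the query can be wrapped in a COUNT(*).
def strip_pagination(sql: str) -> str:
    # Drop ORDER BY up to a trailing LIMIT/OFFSET or end of string
    sql = re.sub(r'\s+ORDER\s+BY\s+[^)]*?(\s+LIMIT|\s+OFFSET|$)', r'\1', sql, flags=re.IGNORECASE)
    # Drop LIMIT and OFFSET themselves
    sql = re.sub(r'\s+LIMIT\s+\d+', '', sql, flags=re.IGNORECASE)
    sql = re.sub(r'\s+OFFSET\s+\d+', '', sql, flags=re.IGNORECASE)
    return sql

query = 'SELECT * FROM sets ORDER BY name ASC LIMIT 50 OFFSET 100'
count_sql = f"SELECT COUNT(*) as total_count FROM ({strip_pagination(query).strip()})"
```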
# Base method that subclasses can override
def list(
self,
/,
*,
override_query: str | None = None,
**context: Any,
) -> None:
"""Load records from database - should be implemented by subclasses that use pagination"""
raise NotImplementedError("Subclass must implement list() method")
# Generic SQL parameters from fields
def sql_parameters(self, /) -> dict[str, Any]:
parameters: dict[str, Any] = {}
+126 -9
@@ -30,6 +30,7 @@ class BrickSet(RebrickableSet):
insert_query: str = 'set/insert'
update_purchase_date_query: str = 'set/update/purchase_date'
update_purchase_price_query: str = 'set/update/purchase_price'
update_description_query: str = 'set/update/description'
# Delete a set
def delete(self, /) -> None:
@@ -56,8 +57,27 @@ class BrickSet(RebrickableSet):
# Grabbing the refresh flag
refresh: bool = bool(data.get('refresh', False))
-# Generate an UUID for self
-self.fields.id = str(uuid4())
+# Generate an UUID for self (or use existing ID if refreshing)
if refresh:
# Find the existing set by set number to get its ID
result = BrickSQL().raw_execute(
'SELECT "id" FROM "bricktracker_sets" WHERE "set" = :set',
{'set': self.fields.set}
).fetchone()
if result:
# Use existing set ID
self.fields.id = result['id']
else:
# If set doesn't exist in database, treat as new import
refresh = False
self.fields.id = str(uuid4())
else:
self.fields.id = str(uuid4())
# Insert the rebrickable set into database FIRST
# This must happen before inserting bricktracker_sets due to FK constraint
self.insert_rebrickable()
if not refresh:
# Save the storage
@@ -74,25 +94,66 @@ class BrickSet(RebrickableSet):
)
self.fields.purchase_location = purchase_location.fields.id
# Insert into database
# Save the purchase date
purchase_date = data.get('purchase_date', None)
if purchase_date == '':
purchase_date = None
if purchase_date is not None:
try:
purchase_date = datetime.strptime(
purchase_date, '%Y/%m/%d'
).timestamp()
except Exception:
purchase_date = None
self.fields.purchase_date = purchase_date
# Save the purchase price
purchase_price = data.get('purchase_price', None)
if purchase_price == '':
purchase_price = None
if purchase_price is not None:
try:
purchase_price = float(purchase_price)
except Exception:
purchase_price = None
self.fields.purchase_price = purchase_price
# Save the description/notes
description = data.get('description', None)
if description == '':
description = None
self.fields.description = description
# Insert into database (deferred - will execute at final commit)
# All operations are atomic - if anything fails, nothing is committed
self.insert(commit=False)
-# Save the owners
+# Save the owners (deferred - will execute at final commit)
owners: list[str] = list(data.get('owners', []))
for id in owners:
owner = BrickSetOwnerList.get(id)
-owner.update_set_state(self, state=True)
+owner.update_set_state(self, state=True, commit=False)
-# Save the tags
+# Save the statuses (deferred - will execute at final commit)
statuses: list[str] = list(data.get('statuses', []))
for id in statuses:
status = BrickSetStatusList.get(id)
status.update_set_state(self, state=True, commit=False)
# Save the tags (deferred - will execute at final commit)
tags: list[str] = list(data.get('tags', []))
for id in tags:
tag = BrickSetTagList.get(id)
-tag.update_set_state(self, state=True)
+tag.update_set_state(self, state=True, commit=False)
-# Insert the rebrickable set into database
-self.insert_rebrickable()
# If refreshing, prepare temp table for tracking parts across both set and minifigs
if refresh:
sql = BrickSQL()
sql.execute('part/create_temp_refresh_tracking_table', defer=False)
sql.execute('part/clear_temp_refresh_tracking_table', defer=False)
# Load the inventory
if not BrickPartList.download(socket, self, refresh=refresh):
@@ -102,6 +163,15 @@ class BrickSet(RebrickableSet):
if not BrickMinifigureList.download(socket, self, refresh=refresh):
return False
# If refreshing, clean up orphaned parts after all parts have been processed
if refresh:
# Delete orphaned parts (parts that weren't in the API response)
BrickSQL().execute(
'part/delete_untracked_parts',
parameters={'id': self.fields.id},
defer=False
)
# Commit the transaction to the database
socket.auto_progress(
message='Set {set}: writing to the database'.format(
@@ -169,6 +239,20 @@ class BrickSet(RebrickableSet):
else:
return ''
# Purchase date max formatted for consolidated sets
def purchase_date_max_formatted(self, /, *, standard: bool = False) -> str:
if hasattr(self.fields, 'purchase_date_max') and self.fields.purchase_date_max is not None:
time = datetime.fromtimestamp(self.fields.purchase_date_max)
if standard:
return time.strftime('%Y/%m/%d')
else:
return time.strftime(
current_app.config['PURCHASE_DATE_FORMAT']
)
else:
return ''
# Purchase price with currency
def purchase_price(self, /) -> str:
if self.fields.purchase_price is not None:
@@ -339,3 +423,36 @@ class BrickSet(RebrickableSet):
# Update purchase price url
def url_for_purchase_price(self, /) -> str:
return url_for('set.update_purchase_price', id=self.fields.id)
# Update description
def update_description(self, json: Any | None, /) -> Any:
value = json.get('value', None) # type: ignore
if value == '':
value = None
self.fields.description = value
rows, _ = BrickSQL().execute_and_commit(
self.update_description_query,
parameters=self.sql_parameters()
)
if rows != 1:
raise DatabaseException('Could not update the description for set {set} ({id})'.format( # noqa: E501
set=self.fields.set,
id=self.fields.id,
))
# Info
logger.info('Description changed to "{value}" for set {set} ({id})'.format( # noqa: E501
value=value,
set=self.fields.set,
id=self.fields.id,
))
return value
# Update description url
def url_for_description(self, /) -> str:
return url_for('set.update_description', id=self.fields.id)
+525 -5
@@ -13,14 +13,19 @@ from .set_storage_list import BrickSetStorageList
from .set_tag import BrickSetTag
from .set_tag_list import BrickSetTagList
from .set import BrickSet
from .theme_list import BrickThemeList
from .instructions_list import BrickInstructionsList
# All the sets from the database
class BrickSetList(BrickRecordList[BrickSet]):
themes: list[str]
years: list[int]
order: str
# Queries
all_query: str = 'set/list/all'
consolidated_query: str = 'set/list/consolidated'
damaged_minifigure_query: str = 'set/list/damaged_minifigure'
damaged_part_query: str = 'set/list/damaged_part'
generic_query: str = 'set/list/generic'
@@ -31,23 +36,525 @@ class BrickSetList(BrickRecordList[BrickSet]):
using_minifigure_query: str = 'set/list/using_minifigure'
using_part_query: str = 'set/list/using_part'
using_storage_query: str = 'set/list/using_storage'
using_purchase_location_query: str = 'set/list/using_purchase_location'
def __init__(self, /):
super().__init__()
# Placeholders
self.themes = []
self.years = []
# Store the order for this list
self.order = current_app.config['SETS_DEFAULT_ORDER']
# All the sets
def all(self, /) -> Self:
-# Load the sets from the database
-self.list(do_theme=True)
# Load the sets from the database with metadata context for filtering
filter_context = {
'owners': BrickSetOwnerList.as_columns(),
'statuses': BrickSetStatusList.as_columns(),
'tags': BrickSetTagList.as_columns(),
}
self.list(do_theme=True, **filter_context)
return self
# All sets in consolidated/grouped view
def all_consolidated(self, /) -> Self:
# Load the sets from the database using consolidated query with metadata context
filter_context = {
'owners_dict': BrickSetOwnerList.as_column_mapping(),
'statuses_dict': BrickSetStatusList.as_column_mapping(),
'tags_dict': BrickSetTagList.as_column_mapping(),
}
self.list(override_query=self.consolidated_query, do_theme=True, **filter_context)
return self
# All sets with pagination and filtering
def all_filtered_paginated(
self,
search_query: str | None = None,
page: int = 1,
per_page: int = 50,
sort_field: str | None = None,
sort_order: str = 'asc',
status_filter: str | None = None,
theme_filter: str | None = None,
owner_filter: str | None = None,
purchase_location_filter: str | None = None,
storage_filter: str | None = None,
tag_filter: str | None = None,
year_filter: str | None = None,
duplicate_filter: bool = False,
use_consolidated: bool = True
) -> tuple[Self, int]:
# Convert theme name to theme ID for filtering
theme_id_filter = None
if theme_filter:
# Check if this is a NOT filter
if theme_filter.startswith('-'):
# Extract the actual theme value without the "-" prefix
actual_theme = theme_filter[1:]
theme_id = self._theme_name_to_id(actual_theme)
# Re-add the "-" prefix to the theme ID
theme_id_filter = f'-{theme_id}' if theme_id else None
else:
theme_id_filter = self._theme_name_to_id(theme_filter)
# Check if any filters are applied
has_filters = any([status_filter, theme_id_filter, owner_filter, purchase_location_filter, storage_filter, tag_filter, year_filter, duplicate_filter])
# Prepare filter context
filter_context = {
'search_query': search_query,
'status_filter': status_filter,
'theme_filter': theme_id_filter, # Use converted theme ID
'owner_filter': owner_filter,
'purchase_location_filter': purchase_location_filter,
'storage_filter': storage_filter,
'tag_filter': tag_filter,
'year_filter': year_filter,
'duplicate_filter': duplicate_filter,
'owners': BrickSetOwnerList.as_columns(),
'statuses': BrickSetStatusList.as_columns(),
'tags': BrickSetTagList.as_columns(),
'owners_dict': BrickSetOwnerList.as_column_mapping(),
'statuses_dict': BrickSetStatusList.as_column_mapping(),
'tags_dict': BrickSetTagList.as_column_mapping(),
}
# Field mapping for sorting
if use_consolidated:
field_mapping = {
'set': '"rebrickable_sets"."number", "rebrickable_sets"."version"',
'name': '"rebrickable_sets"."name"',
'year': '"rebrickable_sets"."year"',
'parts': '"rebrickable_sets"."number_of_parts"',
'theme': '"rebrickable_sets"."theme_id"',
'minifigures': '"total_minifigures"',
'missing': '"total_missing"',
'damaged': '"total_damaged"',
'instances': '"instance_count"', # New field for consolidated view
'purchase-date': '"purchase_date"', # Use the MIN aggregated value
'purchase-price': '"purchase_price"' # Use the MIN aggregated value
}
else:
field_mapping = {
'set': '"rebrickable_sets"."number", "rebrickable_sets"."version"',
'name': '"rebrickable_sets"."name"',
'year': '"rebrickable_sets"."year"',
'parts': '"rebrickable_sets"."number_of_parts"',
'theme': '"rebrickable_sets"."theme_id"',
'minifigures': '"total_minifigures"', # Use the alias from the SQL query
'missing': '"total_missing"', # Use the alias from the SQL query
'damaged': '"total_damaged"', # Use the alias from the SQL query
'purchase-date': '"bricktracker_sets"."purchase_date"',
'purchase-price': '"bricktracker_sets"."purchase_price"'
}
# Choose query based on consolidation preference and filter complexity
# Owner/tag filters still need to fall back to non-consolidated for now
# due to complex aggregation requirements
complex_filters = [owner_filter, tag_filter]
if use_consolidated and not any(complex_filters):
query_to_use = self.consolidated_query
else:
# Use filtered query when consolidation is disabled or complex filters applied
query_to_use = 'set/list/all_filtered'
# Handle instructions filtering
if status_filter in ['has-missing-instructions', '-has-missing-instructions']:
# For instructions filter, we need to load all sets first, then filter and paginate
return self._all_filtered_paginated_with_instructions(
search_query, page, per_page, sort_field, sort_order,
status_filter, theme_id_filter, owner_filter,
purchase_location_filter, storage_filter, tag_filter
)
# Handle special case for set sorting with multiple columns
if sort_field == 'set' and field_mapping:
# Create custom order clause for set sorting
direction = 'DESC' if sort_order.lower() == 'desc' else 'ASC'
custom_order = f'"rebrickable_sets"."number" {direction}, "rebrickable_sets"."version" {direction}'
filter_context['order'] = custom_order
# Remove set from field mapping to avoid double-processing
field_mapping_copy = field_mapping.copy()
field_mapping_copy.pop('set', None)
field_mapping = field_mapping_copy
sort_field = None # Disable automatic ORDER BY construction
# Normal SQL-based filtering and pagination
result, total_count = self.paginate(
page=page,
per_page=per_page,
sort_field=sort_field,
sort_order=sort_order,
list_query=query_to_use,
field_mapping=field_mapping,
**filter_context
)
# Populate themes and years for filter dropdown from filtered dataset (not just current page)
# For themes dropdown, exclude theme_filter to show ALL available themes
themes_context = filter_context.copy()
themes_context.pop('theme_filter', None)
result._populate_themes_from_filtered_dataset(
query_to_use,
**themes_context
)
# For years dropdown, exclude ALL filters to show ALL available years
years_context = {
'search_query': filter_context.get('search_query'),
}
result._populate_years_from_filtered_dataset(
query_to_use,
**years_context
)
return result, total_count
def _populate_themes(self) -> None:
"""Populate themes list from the current records"""
themes = set()
for record in self.records:
if hasattr(record, 'theme') and hasattr(record.theme, 'name'):
themes.add(record.theme.name)
self.themes = list(themes)
self.themes.sort()
def _populate_years(self) -> None:
"""Populate years list from the current records"""
years = set()
for record in self.records:
if hasattr(record, 'fields') and hasattr(record.fields, 'year') and record.fields.year:
years.add(record.fields.year)
self.years = list(years)
self.years.sort(reverse=True) # Most recent years first
def _theme_name_to_id(self, theme_name_or_id: str) -> str | None:
"""Convert a theme name or ID to theme ID for filtering"""
try:
# Check if the input is already a numeric theme ID
if theme_name_or_id.isdigit():
# Input is already a theme ID, validate it exists
theme_list = BrickThemeList()
theme_id = int(theme_name_or_id)
if theme_id in theme_list.themes:
return str(theme_id)
else:
return None
# Input is a theme name, convert to ID
from .sql import BrickSQL
theme_list = BrickThemeList()
# Find all theme IDs that match the name
matching_theme_ids = []
for theme_id, theme in theme_list.themes.items():
if theme.name.lower() == theme_name_or_id.lower():
matching_theme_ids.append(str(theme_id))
if not matching_theme_ids:
return None
# If only one match, return it
if len(matching_theme_ids) == 1:
return matching_theme_ids[0]
# Multiple matches - check which theme ID actually has sets in the user's collection
sql = BrickSQL()
for theme_id in matching_theme_ids:
result = sql.fetchone(
'set/check_theme_exists',
theme_id=theme_id
)
count = result['count'] if result else 0
if count > 0:
return theme_id
# If none have sets, return the first match (fallback)
return matching_theme_ids[0]
except Exception:
# If themes can't be loaded, return None to disable theme filtering
return None
def _theme_id_to_name(self, theme_id: str) -> str | None:
"""Convert a theme ID to theme name (lowercase) for dropdown display"""
try:
if not theme_id or not theme_id.isdigit():
return None
from .theme_list import BrickThemeList
theme_list = BrickThemeList()
theme_id_int = int(theme_id)
if theme_id_int in theme_list.themes:
return theme_list.themes[theme_id_int].name.lower()
return None
except Exception as e:
# For debugging - log the exception
import logging
logger = logging.getLogger(__name__)
logger.warning(f"Failed to convert theme ID {theme_id} to name: {e}")
return None
def _all_filtered_paginated_with_instructions(
self,
search_query: str | None,
page: int,
per_page: int,
sort_field: str | None,
sort_order: str,
status_filter: str,
theme_id_filter: str | None,
owner_filter: str | None,
purchase_location_filter: str | None,
storage_filter: str | None,
tag_filter: str | None
) -> tuple[Self, int]:
"""Handle filtering when instructions filter is involved"""
try:
# Load all sets first (without pagination) with full metadata
all_sets = BrickSetList()
filter_context = {
'owners': BrickSetOwnerList.as_columns(),
'statuses': BrickSetStatusList.as_columns(),
'tags': BrickSetTagList.as_columns(),
}
all_sets.list(do_theme=True, **filter_context)
# Load instructions list
instructions_list = BrickInstructionsList()
instruction_sets = set(instructions_list.sets.keys())
# Apply all filters manually
filtered_records = []
for record in all_sets.records:
# Apply instructions filter
set_id = record.fields.set
has_instructions = set_id in instruction_sets
if status_filter == 'has-missing-instructions' and has_instructions:
continue # Skip sets that have instructions
elif status_filter == '-has-missing-instructions' and not has_instructions:
continue # Skip sets that don't have instructions
# Apply other filters manually
if search_query and not self._matches_search(record, search_query):
continue
if theme_id_filter and not self._matches_theme(record, theme_id_filter):
continue
if owner_filter and not self._matches_owner(record, owner_filter):
continue
if purchase_location_filter and not self._matches_purchase_location(record, purchase_location_filter):
continue
if storage_filter and not self._matches_storage(record, storage_filter):
continue
if tag_filter and not self._matches_tag(record, tag_filter):
continue
filtered_records.append(record)
# Apply sorting
if sort_field:
filtered_records = self._sort_records(filtered_records, sort_field, sort_order)
# Calculate pagination
total_count = len(filtered_records)
start_index = (page - 1) * per_page
end_index = start_index + per_page
paginated_records = filtered_records[start_index:end_index]
# Create result
result = BrickSetList()
result.records = paginated_records
# Copy themes and years from the source that has all sets
result.themes = all_sets.themes if hasattr(all_sets, 'themes') else []
result.years = all_sets.years if hasattr(all_sets, 'years') else []
# If themes or years weren't populated, populate them from current records
if not result.themes:
result._populate_themes()
if not result.years:
result._populate_years()
return result, total_count
except Exception:
# Fall back to normal pagination without instructions filter
return self.all_filtered_paginated(
search_query, page, per_page, sort_field, sort_order,
None, theme_id_filter, owner_filter,
purchase_location_filter, storage_filter, tag_filter
)
def _populate_years_from_filtered_dataset(self, query_name: str, **filter_context) -> None:
"""Populate years list from all available records in filtered dataset"""
try:
# Use a simplified query to get just distinct years
years_context = dict(filter_context)
years_context.pop('limit', None)
years_context.pop('offset', None)
# Use a special lightweight query for years
year_records = super().select(
override_query='set/list/years_only',
**years_context
)
# Extract years from records
years = set()
for record in year_records:
year = record['year'] if 'year' in record.keys() else None
if year:
years.add(year)
if years:
self.years = list(years)
self.years.sort(reverse=True) # Most recent years first
else:
import logging
logger = logging.getLogger(__name__)
logger.warning("No years found in filtered dataset, falling back to current page")
self._populate_years()
except Exception as e:
import logging
logger = logging.getLogger(__name__)
logger.error(f"Exception in _populate_years_from_filtered_dataset: {e}")
self._populate_years()
def _populate_themes_from_filtered_dataset(self, query_name: str, **filter_context) -> None:
"""Populate themes list from filtered dataset (all pages, not just current page)"""
try:
from .theme_list import BrickThemeList
# Use a simplified query to get just distinct theme_ids
theme_context = dict(filter_context)
theme_context.pop('limit', None)
theme_context.pop('offset', None)
# Use a special lightweight query for themes
theme_records = super().select(
override_query='set/list/themes_only',
**theme_context
)
# Convert to theme names
theme_list = BrickThemeList()
themes = set()
for record in theme_records:
theme_id = record['theme_id'] if 'theme_id' in record.keys() else None  # sqlite3.Row has no .get()
if theme_id:
theme = theme_list.get(theme_id)
if theme and hasattr(theme, 'name'):
themes.add(theme.name)
self.themes = list(themes)
self.themes.sort()
except Exception:
# Fall back to simpler approach: get themes from ALL sets (ignoring filters)
# This is better than showing only current page themes
try:
from .theme_list import BrickThemeList
all_sets = BrickSetList()
all_sets.list(do_theme=True)
themes = set()
years = set()
for record in all_sets.records:
if hasattr(record, 'theme') and hasattr(record.theme, 'name'):
themes.add(record.theme.name)
if hasattr(record, 'fields') and hasattr(record.fields, 'year') and record.fields.year:
years.add(record.fields.year)
self.themes = list(themes)
self.themes.sort()
self.years = list(years)
self.years.sort(reverse=True)
except Exception:
# Final fallback to current page themes
self._populate_themes()
self._populate_years()
def _matches_search(self, record, search_query: str) -> bool:
"""Check if record matches search query"""
search_lower = search_query.lower()
return (search_lower in record.fields.name.lower() or
search_lower in record.fields.set.lower())
def _matches_theme(self, record, theme_id: str) -> bool:
"""Check if record matches theme filter"""
return str(record.fields.theme_id) == theme_id
def _matches_owner(self, record, owner_filter: str) -> bool:
"""Check if record matches owner filter"""
if not owner_filter.startswith('owner-'):
return True
# Convert owner-uuid format to owner_uuid column name
owner_column = owner_filter.replace('-', '_')
# Check if record has this owner attribute set to 1
return hasattr(record.fields, owner_column) and getattr(record.fields, owner_column) == 1
def _matches_purchase_location(self, record, location_filter: str) -> bool:
"""Check if record matches purchase location filter"""
return record.fields.purchase_location == location_filter
def _matches_storage(self, record, storage_filter: str) -> bool:
"""Check if record matches storage filter"""
return record.fields.storage == storage_filter
def _matches_tag(self, record, tag_filter: str) -> bool:
"""Check if record matches tag filter"""
if not tag_filter.startswith('tag-'):
return True
# Convert tag-uuid format to tag_uuid column name
tag_column = tag_filter.replace('-', '_')
# Check if record has this tag attribute set to 1
return hasattr(record.fields, tag_column) and getattr(record.fields, tag_column) == 1
def _sort_records(self, records, sort_field: str, sort_order: str):
"""Sort records manually"""
reverse = sort_order == 'desc'
if sort_field == 'set':
return sorted(records, key=lambda r: self._set_sort_key(r.fields.set), reverse=reverse)
elif sort_field == 'name':
return sorted(records, key=lambda r: r.fields.name, reverse=reverse)
elif sort_field == 'year':
return sorted(records, key=lambda r: r.fields.year, reverse=reverse)
elif sort_field == 'parts':
return sorted(records, key=lambda r: r.fields.number_of_parts, reverse=reverse)
# Add more sort fields as needed
return records
def _set_sort_key(self, set_number: str) -> tuple:
"""Generate sort key for set numbers like '10121-1' -> (10121, 1)"""
try:
if '-' in set_number:
main_part, version_part = set_number.split('-', 1)
return (int(main_part), int(version_part))
else:
return (int(set_number), 0)
except (ValueError, TypeError):
# Fallback to string sorting if parsing fails
return (float('inf'), set_number)
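As a standalone illustration of the sort-key logic above (hypothetical sample data): set numbers like "10121-1" sort numerically by (number, version) rather than lexically, and unparseable values sink to the end.

```python
# Sketch of the _set_sort_key behavior: "10121-1" -> (10121, 1),
# bare numbers get version 0, and unparseable strings sort last.
def set_sort_key(set_number: str) -> tuple:
    try:
        if '-' in set_number:
            main_part, version_part = set_number.split('-', 1)
            return (int(main_part), int(version_part))
        return (int(set_number), 0)
    except (ValueError, TypeError):
        return (float('inf'), set_number)

numbers = ['10121-2', '9999-1', '10121-1']
print(sorted(numbers, key=set_sort_key))  # ['9999-1', '10121-1', '10121-2']
```

A plain string sort would instead put both "10121-*" entries before "9999-1".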
# Sets with a minifigure part damaged
def damaged_minifigure(self, figure: str, /) -> Self:
# Save the parameters to the fields
@@ -93,6 +600,7 @@ class BrickSetList(BrickRecordList[BrickSet]):
**context: Any,
) -> None:
themes = set()
years = set()
if order is None:
order = self.order
@@ -102,20 +610,22 @@ class BrickSetList(BrickRecordList[BrickSet]):
override_query=override_query,
order=order,
limit=limit,
owners=BrickSetOwnerList.as_columns(),
statuses=BrickSetStatusList.as_columns(),
tags=BrickSetTagList.as_columns(),
**context
):
brickset = BrickSet(record=record)
self.records.append(brickset)
if do_theme:
themes.add(brickset.theme.name)
if hasattr(brickset, 'fields') and hasattr(brickset.fields, 'year') and brickset.fields.year:
years.add(brickset.fields.year)
# Convert the set into a list and sort it
if do_theme:
self.themes = list(themes)
self.themes.sort()
self.years = list(years)
self.years.sort(reverse=True) # Most recent years first
# Sets missing a minifigure part
def missing_minifigure(self, figure: str, /) -> Self:
@@ -169,6 +679,16 @@ class BrickSetList(BrickRecordList[BrickSet]):
return self
# Sets using a purchase location
def using_purchase_location(self, purchase_location: BrickSetPurchaseLocation, /) -> Self:
# Save the parameters to the fields
self.fields.purchase_location = purchase_location.fields.id
# Load the sets from the database
self.list(override_query=self.using_purchase_location_query)
return self
# Helper to build the metadata lists
def set_metadata_lists(
@@ -1,5 +1,7 @@
from .metadata import BrickMetadata
from flask import url_for
# Lego set purchase location metadata
class BrickSetPurchaseLocation(BrickMetadata):
@@ -11,3 +13,10 @@ class BrickSetPurchaseLocation(BrickMetadata):
select_query: str = 'set/metadata/purchase_location/select'
update_field_query: str = 'set/metadata/purchase_location/update/field'
update_set_value_query: str = 'set/metadata/purchase_location/update/value'
# Self url
def url(self, /) -> str:
return url_for(
'purchase_location.details',
id=self.fields.id,
)
@@ -1,4 +1,6 @@
from .metadata import BrickMetadata
from .exceptions import ErrorException
from .sql import BrickSQL
from flask import url_for
@@ -13,6 +15,7 @@ class BrickSetStorage(BrickMetadata):
select_query: str = 'set/metadata/storage/select'
update_field_query: str = 'set/metadata/storage/update/field'
update_set_value_query: str = 'set/metadata/storage/update/value'
count_usage_query: str = 'set/metadata/storage/count_usage'
# Self url
def url(self, /) -> str:
@@ -20,3 +23,52 @@ class BrickSetStorage(BrickMetadata):
'storage.details',
id=self.fields.id,
)
# Delete from database - check if storage is in use first
def delete(self, /) -> None:
# Check if storage is being used
sql = BrickSQL()
result = sql.fetchone(self.count_usage_query, parameters={'id': self.fields.id})
if result:
sets_count = result[0]
minifigures_count = result[1]
parts_count = result[2]
lots_count = result[3]
total_count = sets_count + minifigures_count + parts_count + lots_count
if total_count > 0:
# Build error message with counts and link
error_parts = []
if sets_count > 0:
error_parts.append('{count} set{plural}'.format(
count=sets_count,
plural='s' if sets_count != 1 else ''
))
if minifigures_count > 0:
error_parts.append('{count} individual minifigure{plural}'.format(
count=minifigures_count,
plural='s' if minifigures_count != 1 else ''
))
if parts_count > 0:
error_parts.append('{count} individual part{plural}'.format(
count=parts_count,
plural='s' if parts_count != 1 else ''
))
if lots_count > 0:
error_parts.append('{count} part lot{plural}'.format(
count=lots_count,
plural='s' if lots_count != 1 else ''
))
error_message = 'Cannot delete storage location "{name}". You need to remove {items} from this storage before it can be deleted. <a href="{url}">View storage details</a>'.format(
name=self.fields.name,
items=', '.join(error_parts),
url=self.url()
)
raise ErrorException(error_message)
# If not in use, proceed with deletion
super().delete()
@@ -6,6 +6,8 @@ from flask_socketio import SocketIO
from .instructions import BrickInstructions
from .instructions_list import BrickInstructionsList
from .peeron_instructions import PeeronInstructions, PeeronPage
from .peeron_pdf import PeeronPDF
from .set import BrickSet
from .socket_decorator import authenticated_socket, rebrickable_socket
from .sql import close as sql_close
@@ -16,11 +18,22 @@ logger = logging.getLogger(__name__)
MESSAGES: Final[dict[str, str]] = {
'COMPLETE': 'complete',
'CONNECT': 'connect',
'CREATE_LOT': 'create_lot',
'CREATE_BULK_INDIVIDUAL_PARTS': 'create_bulk_individual_parts',
'DISCONNECT': 'disconnect',
'DOWNLOAD_INSTRUCTIONS': 'download_instructions',
'DOWNLOAD_PEERON_PAGES': 'download_peeron_pages',
'FAIL': 'fail',
'IMPORT_MINIFIGURE': 'import_minifigure',
'IMPORT_SET': 'import_set',
'LOAD_MINIFIGURE': 'load_minifigure',
'LOAD_PART': 'load_part',
'LOAD_PART_COLORS': 'load_part_colors',
'LOAD_PEERON_PAGES': 'load_peeron_pages',
'LOAD_SET': 'load_set',
'MINIFIGURE_LOADED': 'minifigure_loaded',
'PART_COLORS_LOADED': 'part_colors_loaded',
'PART_LOADED': 'part_loaded',
'PROGRESS': 'progress',
'SET_LOADED': 'set_loaded',
}
@@ -61,6 +74,8 @@ class BrickSocket(object):
)
# Inject CORS if a domain is defined
# Note: For reverse proxy deployments, leave BK_DOMAIN_NAME empty to allow all origins
# When empty, Socket.IO defaults to permissive CORS which works with reverse proxies
if app.config['DOMAIN_NAME'] != '':
kwargs['cors_allowed_origins'] = app.config['DOMAIN_NAME']
@@ -70,7 +85,12 @@ class BrickSocket(object):
*args,
**kwargs,
path=app.config['SOCKET_PATH'],
async_mode='gevent',  # switched from 'eventlet' to 'gevent'
# Enable detailed logging in debug mode for troubleshooting
logger=app.config['DEBUG'],
# Ping/pong settings for mobile network resilience
ping_timeout=30, # Wait 30s for pong response before disconnecting
ping_interval=25, # Send ping every 25s to keep connection alive
)
# Store the socket in the app config
@@ -82,9 +102,23 @@ class BrickSocket(object):
self.connected()
@self.socket.on(MESSAGES['DISCONNECT'], namespace=self.namespace)
def disconnect(reason=None) -> None:
self.disconnected()
@self.socket.on('connect_error', namespace=self.namespace)
def connect_error(data) -> None:
logger.error(f'Socket CONNECT_ERROR: {data}')
@self.socket.on_error(namespace=self.namespace)
def error_handler(e) -> None:
logger.error(f'Socket ERROR: {e}')
try:
user_agent = request.headers.get('User-Agent', 'unknown')
remote_addr = request.remote_addr
logger.error(f'Socket ERROR details: ip={remote_addr}, ua={user_agent[:80]}...')
except Exception:
pass
@self.socket.on(MESSAGES['DOWNLOAD_INSTRUCTIONS'], namespace=self.namespace) # noqa: E501
@authenticated_socket(self)
def download_instructions(data: dict[str, Any], /) -> None:
@@ -106,6 +140,84 @@ class BrickSocket(object):
BrickInstructionsList(force=True)
@self.socket.on(MESSAGES['LOAD_PEERON_PAGES'], namespace=self.namespace) # noqa: E501
def load_peeron_pages(data: dict[str, Any], /) -> None:
logger.debug('Socket: LOAD_PEERON_PAGES={data} (from: {fr})'.format(
data=data, fr=request.remote_addr))
try:
set_number = data.get('set', '')
if not set_number:
self.fail(message="Set number is required")
return
# Create Peeron instructions instance with socket for progress reporting
peeron = PeeronInstructions(set_number, socket=self)
# Find pages (this will report progress for thumbnail caching)
pages = peeron.find_pages()
# Complete the operation (JavaScript will handle redirect)
self.complete(message=f"Found {len(pages)} instruction pages on Peeron")
except Exception as e:
logger.error(f"Error in load_peeron_pages: {e}")
self.fail(message=f"Error loading Peeron pages: {e}")
@self.socket.on(MESSAGES['DOWNLOAD_PEERON_PAGES'], namespace=self.namespace) # noqa: E501
@authenticated_socket(self)
def download_peeron_pages(data: dict[str, Any], /) -> None:
logger.debug('Socket: DOWNLOAD_PEERON_PAGES={data} (from: {fr})'.format(
data=data,
fr=request.sid, # type: ignore
))
try:
# Extract data from the request
set_number = data.get('set', '')
pages_data = data.get('pages', [])
if not set_number:
raise ValueError("Set number is required")
if not pages_data:
raise ValueError("No pages selected")
# Parse set number
if '-' in set_number:
parts = set_number.split('-', 1)
set_num = parts[0]
version_num = parts[1] if len(parts) > 1 else '1'
else:
set_num = set_number
version_num = '1'
# Convert page data to PeeronPage objects
pages = []
for page_data in pages_data:
page = PeeronPage(
page_number=page_data.get('page_number', ''),
original_image_url=page_data.get('original_image_url', ''),
cached_full_image_path=page_data.get('cached_full_image_path', ''),
cached_thumbnail_url='', # Not needed for PDF generation
alt_text=page_data.get('alt_text', ''),
rotation=page_data.get('rotation', 0)
)
pages.append(page)
# Create PDF generator and start download
pdf_generator = PeeronPDF(set_num, version_num, pages, socket=self)
pdf_generator.create_pdf()
# Note: Cache cleanup is handled automatically by pdf_generator.create_pdf()
# Refresh instructions list to include new PDF
BrickInstructionsList(force=True)
except Exception as e:
logger.error(f"Error in download_peeron_pages: {e}")
self.fail(message=f"Error downloading Peeron pages: {e}")
@self.socket.on(MESSAGES['IMPORT_SET'], namespace=self.namespace)
@rebrickable_socket(self)
def import_set(data: dict[str, Any], /) -> None:
@@ -125,6 +237,67 @@ class BrickSocket(object):
BrickSet().load(self, data)
@self.socket.on(MESSAGES['IMPORT_MINIFIGURE'], namespace=self.namespace)
@rebrickable_socket(self)
def import_minifigure(data: dict[str, Any], /) -> None:
logger.debug('Socket: IMPORT_MINIFIGURE={data} (from: {fr})'.format(
data=data,
fr=request.sid, # type: ignore
))
from .individual_minifigure import IndividualMinifigure
IndividualMinifigure().download(self, data)
@self.socket.on(MESSAGES['LOAD_MINIFIGURE'], namespace=self.namespace)
def load_minifigure(data: dict[str, Any], /) -> None:
logger.debug('Socket: LOAD_MINIFIGURE={data} (from: {fr})'.format(
data=data,
fr=request.sid, # type: ignore
))
from .individual_minifigure import IndividualMinifigure
IndividualMinifigure().load(self, data)
@self.socket.on(MESSAGES['LOAD_PART'], namespace=self.namespace)
def load_part(data: dict[str, Any], /) -> None:
logger.debug('Socket: LOAD_PART={data} (from: {fr})'.format(
data=data,
fr=request.sid, # type: ignore
))
from .individual_part import IndividualPart
IndividualPart().add(self, data)
@self.socket.on(MESSAGES['LOAD_PART_COLORS'], namespace=self.namespace)
def load_part_colors(data: dict[str, Any], /) -> None:
logger.debug('Socket: LOAD_PART_COLORS={data} (from: {fr})'.format(
data=data,
fr=request.sid, # type: ignore
))
from .individual_part import IndividualPart
IndividualPart().load_colors(self, data)
@self.socket.on(MESSAGES['CREATE_LOT'], namespace=self.namespace)
@rebrickable_socket(self)
def create_lot(data: dict[str, Any], /) -> None:
logger.debug('Socket: CREATE_LOT (from: {fr})'.format(
fr=request.sid, # type: ignore
))
from .individual_part_lot import IndividualPartLot
IndividualPartLot().create(self, data)
@self.socket.on(MESSAGES['CREATE_BULK_INDIVIDUAL_PARTS'], namespace=self.namespace)
@rebrickable_socket(self)
def create_bulk_individual_parts(data: dict[str, Any], /) -> None:
logger.debug('Socket: CREATE_BULK_INDIVIDUAL_PARTS (from: {fr})'.format(
fr=request.sid, # type: ignore
))
from .individual_part import IndividualPart
IndividualPart().create_bulk(self, data)
# Update the progress auto-incrementing
def auto_progress(
self,
@@ -150,13 +323,32 @@ class BrickSocket(object):
# Socket is connected
def connected(self, /) -> Tuple[str, int]:
logger.debug('Socket: client connected')
# Get detailed connection info for debugging
try:
sid = request.sid # type: ignore
transport = request.environ.get('HTTP_UPGRADE', 'polling')
user_agent = request.headers.get('User-Agent', 'unknown')
remote_addr = request.remote_addr
# Check if it's likely a mobile device
is_mobile = any(x in user_agent.lower() for x in ['iphone', 'ipad', 'android', 'mobile'])
logger.info(
f'Socket CONNECTED: sid={sid}, transport={transport}, '
f'ip={remote_addr}, mobile={is_mobile}, ua={user_agent[:80]}...'
)
except Exception as e:
logger.warning(f'Socket connected but failed to get details: {e}')
return '', 301
# Socket is disconnected
def disconnected(self, /) -> None:
logger.debug('Socket: client disconnected')
try:
sid = request.sid # type: ignore
logger.info(f'Socket DISCONNECTED: sid={sid}')
except Exception as e:
logger.info(f'Socket disconnected (sid unavailable): {e}')
# Emit a message through the socket
def emit(self, name: str, *arg, all=False) -> None:
@@ -60,6 +60,29 @@ class BrickSQL(object):
# Grab a cursor
self.cursor = self.connection.cursor()
# SQLite Performance Optimizations
logger.debug('SQLite3: applying performance optimizations')
# Enable WAL (Write-Ahead Logging) mode for better concurrency
# Allows multiple readers while writer is active
self.connection.execute('PRAGMA journal_mode=WAL')
# Increase cache size for better query performance
# Default is 2000 pages, increase to 10000 pages (~40MB for 4KB pages)
self.connection.execute('PRAGMA cache_size=10000')
# Store temporary tables and indices in memory for speed
self.connection.execute('PRAGMA temp_store=memory')
# Enable foreign key constraints (good practice)
self.connection.execute('PRAGMA foreign_keys=ON')
# Optimize for read performance (trade write speed for read speed)
self.connection.execute('PRAGMA synchronous=NORMAL')
# Analyze database statistics for better query planning
self.connection.execute('ANALYZE')
# Grab the version and check
try:
version = self.fetchone('schema/get_version')
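The PRAGMA tuning applied above can be verified on a throwaway file-backed database (WAL mode requires a file, so a temporary path is used in this sketch):

```python
# Sketch of the SQLite PRAGMA setup: WAL journaling, larger cache,
# in-memory temp store, and enforced foreign keys. The journal_mode
# pragma echoes back the mode that actually took effect.
import os
import sqlite3
import tempfile

fd, path = tempfile.mkstemp(suffix='.db')
os.close(fd)
conn = sqlite3.connect(path)
mode = conn.execute('PRAGMA journal_mode=WAL').fetchone()[0]
conn.execute('PRAGMA cache_size=10000')
conn.execute('PRAGMA temp_store=memory')
conn.execute('PRAGMA foreign_keys=ON')
conn.execute('PRAGMA synchronous=NORMAL')
fk = conn.execute('PRAGMA foreign_keys').fetchone()[0]
print(mode, fk)  # wal 1
conn.close()
```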
@@ -0,0 +1,24 @@
-- A bit unsafe as it does not use a prepared statement but it
-- should not be possible to inject anything through the {{ id }} context
BEGIN TRANSACTION;
-- Delete associated parts first
DELETE FROM "bricktracker_individual_minifigure_parts"
WHERE "id" IS NOT DISTINCT FROM '{{ id }}';
-- Delete metadata from consolidated tables
DELETE FROM "bricktracker_set_owners"
WHERE "id" IS NOT DISTINCT FROM '{{ id }}';
DELETE FROM "bricktracker_set_statuses"
WHERE "id" IS NOT DISTINCT FROM '{{ id }}';
DELETE FROM "bricktracker_set_tags"
WHERE "id" IS NOT DISTINCT FROM '{{ id }}';
-- Delete the individual minifigure itself
DELETE FROM "bricktracker_individual_minifigures"
WHERE "id" IS NOT DISTINCT FROM '{{ id }}';
COMMIT;
@@ -0,0 +1,19 @@
INSERT OR IGNORE INTO "bricktracker_individual_minifigures" (
"id",
"figure",
"quantity",
"description",
"storage",
"purchase_location",
"purchase_date",
"purchase_price"
) VALUES (
:id,
:figure,
:quantity,
:description,
:storage,
:purchase_location,
:purchase_date,
:purchase_price
)
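The INSERT OR IGNORE plus named-parameter style used in the query above can be sketched against a simplified, hypothetical two-column table:

```python
# Sketch of INSERT OR IGNORE with sqlite3 named parameters: a second
# insert with the same primary key is silently skipped instead of raising.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE minifigs (id TEXT PRIMARY KEY, figure TEXT)')
params = {'id': 'uuid-1', 'figure': 'fig-001234'}
sql = 'INSERT OR IGNORE INTO minifigs (id, figure) VALUES (:id, :figure)'
conn.execute(sql, params)
conn.execute(sql, params)  # duplicate id: ignored, no IntegrityError
count = conn.execute('SELECT COUNT(*) FROM minifigs').fetchone()[0]
print(count)  # 1
```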
@@ -0,0 +1,45 @@
-- List all individual minifigures
SELECT
"bricktracker_individual_minifigures"."id",
"bricktracker_individual_minifigures"."figure",
"bricktracker_individual_minifigures"."quantity",
"bricktracker_individual_minifigures"."description",
"bricktracker_individual_minifigures"."storage",
"bricktracker_individual_minifigures"."purchase_location",
"bricktracker_individual_minifigures"."purchase_date",
"bricktracker_individual_minifigures"."purchase_price",
"rebrickable_minifigures"."number",
"rebrickable_minifigures"."name",
"rebrickable_minifigures"."image",
"rebrickable_minifigures"."number_of_parts",
0 AS "total_missing",
0 AS "total_damaged"{% if owners %},
{{ owners }}{% endif %}{% if statuses %},
{{ statuses }}{% endif %}{% if tags %},
{{ tags }}{% endif %}
FROM "bricktracker_individual_minifigures"
INNER JOIN "rebrickable_minifigures"
ON "bricktracker_individual_minifigures"."figure" = "rebrickable_minifigures"."figure"
-- LEFT JOINs for metadata (owners, statuses, tags use separate dynamic column tables)
LEFT JOIN "bricktracker_set_owners"
ON "bricktracker_individual_minifigures"."id" = "bricktracker_set_owners"."id"
LEFT JOIN "bricktracker_set_statuses"
ON "bricktracker_individual_minifigures"."id" = "bricktracker_set_statuses"."id"
LEFT JOIN "bricktracker_set_tags"
ON "bricktracker_individual_minifigures"."id" = "bricktracker_set_tags"."id"
{% if order %}
ORDER BY {{ order }}
{% endif %}
{% if limit %}
LIMIT {{ limit }}
{% endif %}
{% if offset %}
OFFSET {{ offset }}
{% endif %}
@@ -0,0 +1,50 @@
-- Get all individual minifigure instances for a specific purchase location
SELECT
"bricktracker_individual_minifigures"."id",
"bricktracker_individual_minifigures"."figure",
"bricktracker_individual_minifigures"."quantity",
"bricktracker_individual_minifigures"."description",
"bricktracker_individual_minifigures"."storage",
"bricktracker_individual_minifigures"."purchase_location",
"bricktracker_individual_minifigures"."purchase_date",
"bricktracker_individual_minifigures"."purchase_price",
"rebrickable_minifigures"."number",
"rebrickable_minifigures"."name",
"rebrickable_minifigures"."image",
"rebrickable_minifigures"."number_of_parts",
"storage_meta"."name" AS "storage_name",
"purchase_meta"."name" AS "purchase_location_name",
IFNULL("problem_join"."total_missing", 0) AS "total_missing",
IFNULL("problem_join"."total_damaged", 0) AS "total_damaged"
FROM "bricktracker_individual_minifigures"
INNER JOIN "rebrickable_minifigures"
ON "bricktracker_individual_minifigures"."figure" = "rebrickable_minifigures"."figure"
LEFT JOIN "bricktracker_metadata_storages" AS "storage_meta"
ON "bricktracker_individual_minifigures"."storage" = "storage_meta"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations" AS "purchase_meta"
ON "bricktracker_individual_minifigures"."purchase_location" = "purchase_meta"."id"
LEFT JOIN (
SELECT
"bricktracker_individual_minifigure_parts"."id",
SUM("bricktracker_individual_minifigure_parts"."missing") AS "total_missing",
SUM("bricktracker_individual_minifigure_parts"."damaged") AS "total_damaged"
FROM "bricktracker_individual_minifigure_parts"
GROUP BY "bricktracker_individual_minifigure_parts"."id"
) "problem_join"
ON "bricktracker_individual_minifigures"."id" = "problem_join"."id"
WHERE "bricktracker_individual_minifigures"."purchase_location" IS NOT DISTINCT FROM :purchase_location
{% if order %}
ORDER BY {{ order }}
{% else %}
ORDER BY "bricktracker_individual_minifigures"."rowid" DESC
{% endif %}
{% if limit %}
LIMIT {{ limit }}
{% endif %}
@@ -0,0 +1,50 @@
-- Get all individual minifigure instances for a specific storage location
SELECT
"bricktracker_individual_minifigures"."id",
"bricktracker_individual_minifigures"."figure",
"bricktracker_individual_minifigures"."quantity",
"bricktracker_individual_minifigures"."description",
"bricktracker_individual_minifigures"."storage",
"bricktracker_individual_minifigures"."purchase_location",
"bricktracker_individual_minifigures"."purchase_date",
"bricktracker_individual_minifigures"."purchase_price",
"rebrickable_minifigures"."number",
"rebrickable_minifigures"."name",
"rebrickable_minifigures"."image",
"rebrickable_minifigures"."number_of_parts",
"storage_meta"."name" AS "storage_name",
"purchase_meta"."name" AS "purchase_location_name",
IFNULL("problem_join"."total_missing", 0) AS "total_missing",
IFNULL("problem_join"."total_damaged", 0) AS "total_damaged"
FROM "bricktracker_individual_minifigures"
INNER JOIN "rebrickable_minifigures"
ON "bricktracker_individual_minifigures"."figure" = "rebrickable_minifigures"."figure"
LEFT JOIN "bricktracker_metadata_storages" AS "storage_meta"
ON "bricktracker_individual_minifigures"."storage" = "storage_meta"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations" AS "purchase_meta"
ON "bricktracker_individual_minifigures"."purchase_location" = "purchase_meta"."id"
LEFT JOIN (
SELECT
"bricktracker_individual_minifigure_parts"."id",
SUM("bricktracker_individual_minifigure_parts"."missing") AS "total_missing",
SUM("bricktracker_individual_minifigure_parts"."damaged") AS "total_damaged"
FROM "bricktracker_individual_minifigure_parts"
GROUP BY "bricktracker_individual_minifigure_parts"."id"
) "problem_join"
ON "bricktracker_individual_minifigures"."id" = "problem_join"."id"
WHERE "bricktracker_individual_minifigures"."storage" IS NOT DISTINCT FROM :storage
{% if order %}
ORDER BY {{ order }}
{% else %}
ORDER BY "bricktracker_individual_minifigures"."rowid" DESC
{% endif %}
{% if limit %}
LIMIT {{ limit }}
{% endif %}
@@ -0,0 +1,50 @@
-- Get all individual minifigure instances without storage
SELECT
"bricktracker_individual_minifigures"."id",
"bricktracker_individual_minifigures"."figure",
"bricktracker_individual_minifigures"."quantity",
"bricktracker_individual_minifigures"."description",
"bricktracker_individual_minifigures"."storage",
"bricktracker_individual_minifigures"."purchase_location",
"bricktracker_individual_minifigures"."purchase_date",
"bricktracker_individual_minifigures"."purchase_price",
"rebrickable_minifigures"."number",
"rebrickable_minifigures"."name",
"rebrickable_minifigures"."image",
"rebrickable_minifigures"."number_of_parts",
"storage_meta"."name" AS "storage_name",
"purchase_meta"."name" AS "purchase_location_name",
IFNULL("problem_join"."total_missing", 0) AS "total_missing",
IFNULL("problem_join"."total_damaged", 0) AS "total_damaged"
FROM "bricktracker_individual_minifigures"
INNER JOIN "rebrickable_minifigures"
ON "bricktracker_individual_minifigures"."figure" = "rebrickable_minifigures"."figure"
LEFT JOIN "bricktracker_metadata_storages" AS "storage_meta"
ON "bricktracker_individual_minifigures"."storage" = "storage_meta"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations" AS "purchase_meta"
ON "bricktracker_individual_minifigures"."purchase_location" = "purchase_meta"."id"
LEFT JOIN (
SELECT
"bricktracker_individual_minifigure_parts"."id",
SUM("bricktracker_individual_minifigure_parts"."missing") AS "total_missing",
SUM("bricktracker_individual_minifigure_parts"."damaged") AS "total_damaged"
FROM "bricktracker_individual_minifigure_parts"
GROUP BY "bricktracker_individual_minifigure_parts"."id"
) "problem_join"
ON "bricktracker_individual_minifigures"."id" = "problem_join"."id"
WHERE "bricktracker_individual_minifigures"."storage" IS NULL
{% if order %}
ORDER BY {{ order }}
{% else %}
ORDER BY "bricktracker_individual_minifigures"."rowid" DESC
{% endif %}
{% if limit %}
LIMIT {{ limit }}
{% endif %}
@@ -0,0 +1,24 @@
-- Insert a part for an individual minifigure instance
INSERT OR IGNORE INTO "bricktracker_individual_minifigure_parts" (
"id",
"part",
"color",
"spare",
"quantity",
"element",
"rebrickable_inventory",
"missing",
"damaged",
"checked"
) VALUES (
:id,
:part,
:color,
:spare,
:quantity,
:element,
:rebrickable_inventory,
0,
0,
0
)
@@ -0,0 +1,38 @@
-- Query parts for a specific individual minifigure instance
SELECT
"bricktracker_individual_minifigure_parts"."id",
"bricktracker_individual_minifigures"."figure",
"bricktracker_individual_minifigure_parts"."part",
"bricktracker_individual_minifigure_parts"."color",
"bricktracker_individual_minifigure_parts"."spare",
"bricktracker_individual_minifigure_parts"."quantity",
"bricktracker_individual_minifigure_parts"."element",
"bricktracker_individual_minifigure_parts"."missing" AS "total_missing",
"bricktracker_individual_minifigure_parts"."damaged" AS "total_damaged",
"bricktracker_individual_minifigure_parts"."checked",
"rebrickable_parts"."color_name",
"rebrickable_parts"."color_rgb",
"rebrickable_parts"."color_transparent",
"rebrickable_parts"."bricklink_color_id",
"rebrickable_parts"."bricklink_color_name",
"rebrickable_parts"."bricklink_part_num",
"rebrickable_parts"."name",
"rebrickable_parts"."image",
"rebrickable_parts"."image_id",
"rebrickable_parts"."url",
"rebrickable_parts"."print",
NULL AS "total_quantity",
NULL AS "total_spare",
NULL AS "total_sets",
NULL AS "total_minifigures"
FROM "bricktracker_individual_minifigure_parts"
INNER JOIN "bricktracker_individual_minifigures"
ON "bricktracker_individual_minifigure_parts"."id" = "bricktracker_individual_minifigures"."id"
INNER JOIN "rebrickable_parts"
ON "bricktracker_individual_minifigure_parts"."part" = "rebrickable_parts"."part"
AND "bricktracker_individual_minifigure_parts"."color" = "rebrickable_parts"."color_id"
WHERE "bricktracker_individual_minifigure_parts"."id" IS NOT DISTINCT FROM :id
{% if order %}
ORDER BY {{ order | replace('"combined"', '"bricktracker_individual_minifigure_parts"') | replace('"bricktracker_parts"', '"bricktracker_individual_minifigure_parts"') }}
{% endif %}
@@ -0,0 +1,33 @@
-- Select a specific part from an individual minifigure instance
SELECT
"bricktracker_individual_minifigure_parts"."id",
"bricktracker_individual_minifigures"."figure",
"bricktracker_individual_minifigure_parts"."part",
"bricktracker_individual_minifigure_parts"."color",
"bricktracker_individual_minifigure_parts"."spare",
"bricktracker_individual_minifigure_parts"."quantity",
"bricktracker_individual_minifigure_parts"."element",
"bricktracker_individual_minifigure_parts"."missing",
"bricktracker_individual_minifigure_parts"."damaged",
"bricktracker_individual_minifigure_parts"."checked",
"rebrickable_parts"."color_name",
"rebrickable_parts"."color_rgb",
"rebrickable_parts"."color_transparent",
"rebrickable_parts"."bricklink_color_id",
"rebrickable_parts"."bricklink_color_name",
"rebrickable_parts"."bricklink_part_num",
"rebrickable_parts"."name",
"rebrickable_parts"."image",
"rebrickable_parts"."image_id",
"rebrickable_parts"."url",
"rebrickable_parts"."print"
FROM "bricktracker_individual_minifigure_parts"
INNER JOIN "bricktracker_individual_minifigures"
ON "bricktracker_individual_minifigure_parts"."id" = "bricktracker_individual_minifigures"."id"
INNER JOIN "rebrickable_parts"
ON "bricktracker_individual_minifigure_parts"."part" = "rebrickable_parts"."part"
AND "bricktracker_individual_minifigure_parts"."color" = "rebrickable_parts"."color_id"
WHERE "bricktracker_individual_minifigure_parts"."id" IS NOT DISTINCT FROM :id
AND "bricktracker_individual_minifigure_parts"."part" IS NOT DISTINCT FROM :part
AND "bricktracker_individual_minifigure_parts"."color" IS NOT DISTINCT FROM :color
AND "bricktracker_individual_minifigure_parts"."spare" IS NOT DISTINCT FROM :spare
@@ -0,0 +1,7 @@
-- Update checked status for an individual minifigure part
UPDATE "bricktracker_individual_minifigure_parts"
SET "checked" = :checked
WHERE "bricktracker_individual_minifigure_parts"."id" IS NOT DISTINCT FROM :id
AND "bricktracker_individual_minifigure_parts"."part" IS NOT DISTINCT FROM :part
AND "bricktracker_individual_minifigure_parts"."color" IS NOT DISTINCT FROM :color
AND "bricktracker_individual_minifigure_parts"."spare" IS NOT DISTINCT FROM :spare
@@ -0,0 +1,7 @@
-- Update damaged count for an individual minifigure part
UPDATE "bricktracker_individual_minifigure_parts"
SET "damaged" = :damaged
WHERE "bricktracker_individual_minifigure_parts"."id" IS NOT DISTINCT FROM :id
AND "bricktracker_individual_minifigure_parts"."part" IS NOT DISTINCT FROM :part
AND "bricktracker_individual_minifigure_parts"."color" IS NOT DISTINCT FROM :color
AND "bricktracker_individual_minifigure_parts"."spare" IS NOT DISTINCT FROM :spare
@@ -0,0 +1,7 @@
-- Update missing count for an individual minifigure part
UPDATE "bricktracker_individual_minifigure_parts"
SET "missing" = :missing
WHERE "bricktracker_individual_minifigure_parts"."id" IS NOT DISTINCT FROM :id
AND "bricktracker_individual_minifigure_parts"."part" IS NOT DISTINCT FROM :part
AND "bricktracker_individual_minifigure_parts"."color" IS NOT DISTINCT FROM :color
AND "bricktracker_individual_minifigure_parts"."spare" IS NOT DISTINCT FROM :spare
@@ -0,0 +1,52 @@
-- Get a specific individual minifigure instance by ID
SELECT
"bricktracker_individual_minifigures"."id",
"bricktracker_individual_minifigures"."figure",
"bricktracker_individual_minifigures"."quantity",
"bricktracker_individual_minifigures"."description",
"bricktracker_individual_minifigures"."storage",
"bricktracker_individual_minifigures"."purchase_location",
"bricktracker_individual_minifigures"."purchase_date",
"bricktracker_individual_minifigures"."purchase_price",
"rebrickable_minifigures"."number",
"rebrickable_minifigures"."name",
"rebrickable_minifigures"."image",
"rebrickable_minifigures"."number_of_parts",
"storage_meta"."name" AS "storage_name",
"purchase_meta"."name" AS "purchase_location_name",
IFNULL("problem_join"."total_missing", 0) AS "total_missing",
IFNULL("problem_join"."total_damaged", 0) AS "total_damaged"{% if owners %},
{{ owners }}{% endif %}{% if statuses %},
{{ statuses }}{% endif %}{% if tags %},
{{ tags }}{% endif %}
FROM "bricktracker_individual_minifigures"
INNER JOIN "rebrickable_minifigures"
ON "bricktracker_individual_minifigures"."figure" = "rebrickable_minifigures"."figure"
LEFT JOIN "bricktracker_metadata_storages" AS "storage_meta"
ON "bricktracker_individual_minifigures"."storage" = "storage_meta"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations" AS "purchase_meta"
ON "bricktracker_individual_minifigures"."purchase_location" = "purchase_meta"."id"
LEFT JOIN "bricktracker_set_owners"
ON "bricktracker_individual_minifigures"."id" IS NOT DISTINCT FROM "bricktracker_set_owners"."id"
LEFT JOIN "bricktracker_set_statuses"
ON "bricktracker_individual_minifigures"."id" IS NOT DISTINCT FROM "bricktracker_set_statuses"."id"
LEFT JOIN "bricktracker_set_tags"
ON "bricktracker_individual_minifigures"."id" IS NOT DISTINCT FROM "bricktracker_set_tags"."id"
LEFT JOIN (
SELECT
"bricktracker_individual_minifigure_parts"."id",
SUM("bricktracker_individual_minifigure_parts"."missing") AS "total_missing",
SUM("bricktracker_individual_minifigure_parts"."damaged") AS "total_damaged"
FROM "bricktracker_individual_minifigure_parts"
GROUP BY "bricktracker_individual_minifigure_parts"."id"
) "problem_join"
ON "bricktracker_individual_minifigures"."id" = "problem_join"."id"
WHERE "bricktracker_individual_minifigures"."id" = :id
@@ -0,0 +1,54 @@
-- Get all individual minifigure instances for a specific figure
SELECT
"bricktracker_individual_minifigures"."id",
"bricktracker_individual_minifigures"."figure",
"bricktracker_individual_minifigures"."quantity",
"bricktracker_individual_minifigures"."description",
"bricktracker_individual_minifigures"."storage",
"bricktracker_individual_minifigures"."purchase_location",
"bricktracker_individual_minifigures"."purchase_date",
"bricktracker_individual_minifigures"."purchase_price",
"rebrickable_minifigures"."number",
"rebrickable_minifigures"."name",
"rebrickable_minifigures"."image",
"rebrickable_minifigures"."number_of_parts",
"storage_meta"."name" AS "storage_name",
"purchase_meta"."name" AS "purchase_location_name",
{{ owners }},
{{ statuses }},
{{ tags }},
IFNULL("problem_join"."total_missing", 0) AS "total_missing",
IFNULL("problem_join"."total_damaged", 0) AS "total_damaged"
FROM "bricktracker_individual_minifigures"
INNER JOIN "rebrickable_minifigures"
ON "bricktracker_individual_minifigures"."figure" = "rebrickable_minifigures"."figure"
LEFT JOIN "bricktracker_metadata_storages" AS "storage_meta"
ON "bricktracker_individual_minifigures"."storage" = "storage_meta"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations" AS "purchase_meta"
ON "bricktracker_individual_minifigures"."purchase_location" = "purchase_meta"."id"
LEFT JOIN "bricktracker_set_owners"
ON "bricktracker_individual_minifigures"."id" = "bricktracker_set_owners"."id"
LEFT JOIN "bricktracker_set_statuses"
ON "bricktracker_individual_minifigures"."id" = "bricktracker_set_statuses"."id"
LEFT JOIN "bricktracker_set_tags"
ON "bricktracker_individual_minifigures"."id" = "bricktracker_set_tags"."id"
LEFT JOIN (
SELECT
"bricktracker_individual_minifigure_parts"."id",
SUM("bricktracker_individual_minifigure_parts"."missing") AS "total_missing",
SUM("bricktracker_individual_minifigure_parts"."damaged") AS "total_damaged"
FROM "bricktracker_individual_minifigure_parts"
GROUP BY "bricktracker_individual_minifigure_parts"."id"
) "problem_join"
ON "bricktracker_individual_minifigures"."id" = "problem_join"."id"
WHERE "bricktracker_individual_minifigures"."figure" = :figure
ORDER BY "bricktracker_individual_minifigures"."rowid" DESC
@@ -0,0 +1,10 @@
-- Update an individual minifigure instance
UPDATE "bricktracker_individual_minifigures"
SET
"quantity" = :quantity,
"description" = :description,
"storage" = :storage,
"purchase_location" = :purchase_location,
"purchase_date" = :purchase_date,
"purchase_price" = :purchase_price
WHERE "id" = :id
@@ -0,0 +1,17 @@
-- A bit unsafe, as it does not use a prepared statement, but it should
-- not be possible to inject anything through the {{ id }} context
BEGIN TRANSACTION;
-- Delete metadata from consolidated tables
DELETE FROM "bricktracker_set_owners"
WHERE "id" IS NOT DISTINCT FROM '{{ id }}';
DELETE FROM "bricktracker_set_tags"
WHERE "id" IS NOT DISTINCT FROM '{{ id }}';
-- Delete the individual part itself
DELETE FROM "bricktracker_individual_parts"
WHERE "id" IS NOT DISTINCT FROM '{{ id }}';
COMMIT;
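The templated delete scripts above run several statements inside one transaction, which is why they interpolate `{{ id }}` at render time instead of binding it: script-execution APIs such as sqlite3's `executescript` do not accept bound parameters. A hedged sketch of the pattern, with a reduced table set and `str.replace` standing in for the Jinja rendering the application presumably performs:

```python
import sqlite3
import uuid

# Multi-statement script; the id is interpolated, not bound. 'IS' is used
# here as the version-safe spelling of IS NOT DISTINCT FROM.
TEMPLATE = """
BEGIN TRANSACTION;
DELETE FROM "bricktracker_set_owners" WHERE "id" IS '{{ id }}';
DELETE FROM "bricktracker_individual_parts" WHERE "id" IS '{{ id }}';
COMMIT;
"""

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "bricktracker_set_owners" ("id" TEXT)')
conn.execute('CREATE TABLE "bricktracker_individual_parts" ("id" TEXT)')

# UUIDs contain only hex digits and hyphens, never quotes, which is what
# makes the "no injection through {{ id }}" comment hold for this input.
part_id = str(uuid.uuid4())
conn.execute('INSERT INTO "bricktracker_individual_parts" VALUES (?)', (part_id,))

conn.executescript(TEMPLATE.replace("{{ id }}", part_id))
count = conn.execute(
    'SELECT COUNT(*) FROM "bricktracker_individual_parts"'
).fetchone()[0]
print(count)  # 0
```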
@@ -0,0 +1,30 @@
-- Insert a new individual part
INSERT INTO "bricktracker_individual_parts" (
"id",
"part",
"color",
"quantity",
"missing",
"damaged",
"checked",
"description",
"lot_id",
"storage",
"purchase_location",
"purchase_date",
"purchase_price"
) VALUES (
:id,
:part,
:color,
:quantity,
:missing,
:damaged,
:checked,
:description,
:lot_id,
:storage,
:purchase_location,
:purchase_date,
:purchase_price
)
@@ -0,0 +1,30 @@
-- Insert an individual part that belongs to a lot
INSERT INTO "bricktracker_individual_parts" (
"id",
"part",
"color",
"quantity",
"missing",
"damaged",
"checked",
"description",
"storage",
"purchase_location",
"purchase_date",
"purchase_price",
"lot_id"
) VALUES (
:id,
:part,
:color,
:quantity,
0,
0,
0,
NULL,
NULL,
NULL,
NULL,
NULL,
:lot_id
)
@@ -0,0 +1,42 @@
-- List all individual parts
SELECT
"bricktracker_individual_parts"."id",
"bricktracker_individual_parts"."part",
"bricktracker_individual_parts"."color",
"bricktracker_individual_parts"."quantity",
"bricktracker_individual_parts"."missing",
"bricktracker_individual_parts"."damaged",
"bricktracker_individual_parts"."checked",
"bricktracker_individual_parts"."description",
"bricktracker_individual_parts"."lot_id",
"bricktracker_individual_parts"."storage",
"bricktracker_individual_parts"."purchase_location",
"bricktracker_individual_parts"."purchase_date",
"bricktracker_individual_parts"."purchase_price",
"rebrickable_parts"."name" AS "part_name",
"rebrickable_parts"."color_name",
"rebrickable_parts"."color_rgb",
"rebrickable_parts"."color_transparent",
"rebrickable_parts"."category",
"rebrickable_parts"."image",
"rebrickable_parts"."image_id",
"rebrickable_parts"."url" AS "part_url",
"rebrickable_parts"."bricklink_part_num",
"rebrickable_parts"."bricklink_color_id",
"rebrickable_parts"."bricklink_color_name"
FROM "bricktracker_individual_parts"
INNER JOIN "rebrickable_parts"
ON "bricktracker_individual_parts"."part" = "rebrickable_parts"."part"
AND "bricktracker_individual_parts"."color" = "rebrickable_parts"."color_id"
{% if order %}
ORDER BY {{ order }}
{% endif %}
{% if limit %}
LIMIT {{ limit }}
{% endif %}
{% if offset %}
OFFSET {{ offset }}
{% endif %}
@@ -0,0 +1,32 @@
-- Select individual parts of a specific color
SELECT
"bricktracker_individual_parts"."id",
"bricktracker_individual_parts"."part",
"bricktracker_individual_parts"."color",
"bricktracker_individual_parts"."quantity",
"bricktracker_individual_parts"."missing",
"bricktracker_individual_parts"."damaged",
"bricktracker_individual_parts"."checked",
"bricktracker_individual_parts"."description",
"bricktracker_individual_parts"."storage",
"bricktracker_individual_parts"."purchase_location",
"bricktracker_individual_parts"."purchase_date",
"bricktracker_individual_parts"."purchase_price",
"rebrickable_parts"."name",
"rebrickable_parts"."color_name",
"rebrickable_parts"."color_rgb",
"rebrickable_parts"."color_transparent",
"rebrickable_parts"."image",
"rebrickable_parts"."url",
"bricktracker_metadata_storages"."name" AS "storage_name",
"bricktracker_metadata_purchase_locations"."name" AS "purchase_location_name"
FROM "bricktracker_individual_parts"
INNER JOIN "rebrickable_parts"
ON "bricktracker_individual_parts"."part" = "rebrickable_parts"."part"
AND "bricktracker_individual_parts"."color" = "rebrickable_parts"."color_id"
LEFT JOIN "bricktracker_metadata_storages"
ON "bricktracker_individual_parts"."storage" IS NOT DISTINCT FROM "bricktracker_metadata_storages"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations"
ON "bricktracker_individual_parts"."purchase_location" IS NOT DISTINCT FROM "bricktracker_metadata_purchase_locations"."id"
WHERE "bricktracker_individual_parts"."color" = :color
ORDER BY "bricktracker_individual_parts"."part"
@@ -0,0 +1,32 @@
-- Select individual parts for a specific part number
SELECT
"bricktracker_individual_parts"."id",
"bricktracker_individual_parts"."part",
"bricktracker_individual_parts"."color",
"bricktracker_individual_parts"."quantity",
"bricktracker_individual_parts"."missing",
"bricktracker_individual_parts"."damaged",
"bricktracker_individual_parts"."checked",
"bricktracker_individual_parts"."description",
"bricktracker_individual_parts"."storage",
"bricktracker_individual_parts"."purchase_location",
"bricktracker_individual_parts"."purchase_date",
"bricktracker_individual_parts"."purchase_price",
"rebrickable_parts"."name",
"rebrickable_parts"."color_name",
"rebrickable_parts"."color_rgb",
"rebrickable_parts"."color_transparent",
"rebrickable_parts"."image",
"rebrickable_parts"."url",
"bricktracker_metadata_storages"."name" AS "storage_name",
"bricktracker_metadata_purchase_locations"."name" AS "purchase_location_name"
FROM "bricktracker_individual_parts"
INNER JOIN "rebrickable_parts"
ON "bricktracker_individual_parts"."part" = "rebrickable_parts"."part"
AND "bricktracker_individual_parts"."color" = "rebrickable_parts"."color_id"
LEFT JOIN "bricktracker_metadata_storages"
ON "bricktracker_individual_parts"."storage" IS NOT DISTINCT FROM "bricktracker_metadata_storages"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations"
ON "bricktracker_individual_parts"."purchase_location" IS NOT DISTINCT FROM "bricktracker_metadata_purchase_locations"."id"
WHERE "bricktracker_individual_parts"."part" = :part
ORDER BY "bricktracker_individual_parts"."color"
@@ -0,0 +1,35 @@
-- Select standalone individual parts (no lot) for a specific part and color
SELECT
"bricktracker_individual_parts"."id",
"bricktracker_individual_parts"."part",
"bricktracker_individual_parts"."color",
"bricktracker_individual_parts"."quantity",
"bricktracker_individual_parts"."missing",
"bricktracker_individual_parts"."damaged",
"bricktracker_individual_parts"."checked",
"bricktracker_individual_parts"."description",
"bricktracker_individual_parts"."lot_id",
"bricktracker_individual_parts"."storage",
"bricktracker_individual_parts"."purchase_location",
"bricktracker_individual_parts"."purchase_date",
"bricktracker_individual_parts"."purchase_price",
"rebrickable_parts"."name",
"rebrickable_parts"."color_name",
"rebrickable_parts"."color_rgb",
"rebrickable_parts"."color_transparent",
"rebrickable_parts"."image",
"rebrickable_parts"."url",
"bricktracker_metadata_storages"."name" AS "storage_name",
"bricktracker_metadata_purchase_locations"."name" AS "purchase_location_name"
FROM "bricktracker_individual_parts"
INNER JOIN "rebrickable_parts"
ON "bricktracker_individual_parts"."part" = "rebrickable_parts"."part"
AND "bricktracker_individual_parts"."color" = "rebrickable_parts"."color_id"
LEFT JOIN "bricktracker_metadata_storages"
ON "bricktracker_individual_parts"."storage" IS NOT DISTINCT FROM "bricktracker_metadata_storages"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations"
ON "bricktracker_individual_parts"."purchase_location" IS NOT DISTINCT FROM "bricktracker_metadata_purchase_locations"."id"
WHERE "bricktracker_individual_parts"."part" = :part
AND "bricktracker_individual_parts"."color" = :color
AND "bricktracker_individual_parts"."lot_id" IS NULL
ORDER BY "bricktracker_individual_parts"."id"
@@ -0,0 +1,32 @@
-- Select individual parts stored in a specific storage
SELECT
"bricktracker_individual_parts"."id",
"bricktracker_individual_parts"."part",
"bricktracker_individual_parts"."color",
"bricktracker_individual_parts"."quantity",
"bricktracker_individual_parts"."missing",
"bricktracker_individual_parts"."damaged",
"bricktracker_individual_parts"."checked",
"bricktracker_individual_parts"."description",
"bricktracker_individual_parts"."storage",
"bricktracker_individual_parts"."purchase_location",
"bricktracker_individual_parts"."purchase_date",
"bricktracker_individual_parts"."purchase_price",
"rebrickable_parts"."name",
"rebrickable_parts"."color_name",
"rebrickable_parts"."color_rgb",
"rebrickable_parts"."color_transparent",
"rebrickable_parts"."image",
"rebrickable_parts"."url",
"bricktracker_metadata_storages"."name" AS "storage_name",
"bricktracker_metadata_purchase_locations"."name" AS "purchase_location_name"
FROM "bricktracker_individual_parts"
INNER JOIN "rebrickable_parts"
ON "bricktracker_individual_parts"."part" = "rebrickable_parts"."part"
AND "bricktracker_individual_parts"."color" = "rebrickable_parts"."color_id"
LEFT JOIN "bricktracker_metadata_storages"
ON "bricktracker_individual_parts"."storage" IS NOT DISTINCT FROM "bricktracker_metadata_storages"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations"
ON "bricktracker_individual_parts"."purchase_location" IS NOT DISTINCT FROM "bricktracker_metadata_purchase_locations"."id"
WHERE "bricktracker_individual_parts"."storage" = :storage
ORDER BY "bricktracker_individual_parts"."part", "bricktracker_individual_parts"."color"
@@ -0,0 +1,33 @@
-- Select individual parts with problems (missing or damaged)
SELECT
"bricktracker_individual_parts"."id",
"bricktracker_individual_parts"."part",
"bricktracker_individual_parts"."color",
"bricktracker_individual_parts"."quantity",
"bricktracker_individual_parts"."missing",
"bricktracker_individual_parts"."damaged",
"bricktracker_individual_parts"."checked",
"bricktracker_individual_parts"."description",
"bricktracker_individual_parts"."storage",
"bricktracker_individual_parts"."purchase_location",
"bricktracker_individual_parts"."purchase_date",
"bricktracker_individual_parts"."purchase_price",
"rebrickable_parts"."name",
"rebrickable_parts"."color_name",
"rebrickable_parts"."color_rgb",
"rebrickable_parts"."color_transparent",
"rebrickable_parts"."image",
"rebrickable_parts"."url",
"bricktracker_metadata_storages"."name" AS "storage_name",
"bricktracker_metadata_purchase_locations"."name" AS "purchase_location_name"
FROM "bricktracker_individual_parts"
INNER JOIN "rebrickable_parts"
ON "bricktracker_individual_parts"."part" = "rebrickable_parts"."part"
AND "bricktracker_individual_parts"."color" = "rebrickable_parts"."color_id"
LEFT JOIN "bricktracker_metadata_storages"
ON "bricktracker_individual_parts"."storage" IS NOT DISTINCT FROM "bricktracker_metadata_storages"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations"
ON "bricktracker_individual_parts"."purchase_location" IS NOT DISTINCT FROM "bricktracker_metadata_purchase_locations"."id"
WHERE "bricktracker_individual_parts"."missing" > 0
OR "bricktracker_individual_parts"."damaged" > 0
ORDER BY "bricktracker_individual_parts"."part", "bricktracker_individual_parts"."color"
@@ -0,0 +1,33 @@
-- Select standalone individual parts for a purchase location (NULL-safe)
SELECT
"bricktracker_individual_parts"."id",
"bricktracker_individual_parts"."part",
"bricktracker_individual_parts"."color",
"bricktracker_individual_parts"."quantity",
"bricktracker_individual_parts"."missing",
"bricktracker_individual_parts"."damaged",
"bricktracker_individual_parts"."checked",
"bricktracker_individual_parts"."description",
"bricktracker_individual_parts"."storage",
"bricktracker_individual_parts"."purchase_location",
"bricktracker_individual_parts"."purchase_date",
"bricktracker_individual_parts"."purchase_price",
"rebrickable_parts"."name",
"rebrickable_parts"."color_name",
"rebrickable_parts"."color_rgb",
"rebrickable_parts"."color_transparent",
"rebrickable_parts"."image",
"rebrickable_parts"."url",
"bricktracker_metadata_storages"."name" AS "storage_name",
"bricktracker_metadata_purchase_locations"."name" AS "purchase_location_name"
FROM "bricktracker_individual_parts"
INNER JOIN "rebrickable_parts"
ON "bricktracker_individual_parts"."part" = "rebrickable_parts"."part"
AND "bricktracker_individual_parts"."color" = "rebrickable_parts"."color_id"
LEFT JOIN "bricktracker_metadata_storages"
ON "bricktracker_individual_parts"."storage" IS NOT DISTINCT FROM "bricktracker_metadata_storages"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations"
ON "bricktracker_individual_parts"."purchase_location" IS NOT DISTINCT FROM "bricktracker_metadata_purchase_locations"."id"
WHERE "bricktracker_individual_parts"."purchase_location" IS NOT DISTINCT FROM :purchase_location
AND "bricktracker_individual_parts"."lot_id" IS NULL
ORDER BY "bricktracker_individual_parts"."part", "bricktracker_individual_parts"."color"
@@ -0,0 +1,32 @@
-- Select individual parts for a storage (NULL-safe on :storage)
SELECT
"bricktracker_individual_parts"."id",
"bricktracker_individual_parts"."part",
"bricktracker_individual_parts"."color",
"bricktracker_individual_parts"."quantity",
"bricktracker_individual_parts"."missing",
"bricktracker_individual_parts"."damaged",
"bricktracker_individual_parts"."checked",
"bricktracker_individual_parts"."description",
"bricktracker_individual_parts"."storage",
"bricktracker_individual_parts"."purchase_location",
"bricktracker_individual_parts"."purchase_date",
"bricktracker_individual_parts"."purchase_price",
"rebrickable_parts"."name",
"rebrickable_parts"."color_name",
"rebrickable_parts"."color_rgb",
"rebrickable_parts"."color_transparent",
"rebrickable_parts"."image",
"rebrickable_parts"."url",
"bricktracker_metadata_storages"."name" AS "storage_name",
"bricktracker_metadata_purchase_locations"."name" AS "purchase_location_name"
FROM "bricktracker_individual_parts"
INNER JOIN "rebrickable_parts"
ON "bricktracker_individual_parts"."part" = "rebrickable_parts"."part"
AND "bricktracker_individual_parts"."color" = "rebrickable_parts"."color_id"
LEFT JOIN "bricktracker_metadata_storages"
ON "bricktracker_individual_parts"."storage" IS NOT DISTINCT FROM "bricktracker_metadata_storages"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations"
ON "bricktracker_individual_parts"."purchase_location" IS NOT DISTINCT FROM "bricktracker_metadata_purchase_locations"."id"
WHERE "bricktracker_individual_parts"."storage" IS NOT DISTINCT FROM :storage
ORDER BY "bricktracker_individual_parts"."part", "bricktracker_individual_parts"."color"
@@ -0,0 +1,32 @@
-- Select individual parts without storage
SELECT
"bricktracker_individual_parts"."id",
"bricktracker_individual_parts"."part",
"bricktracker_individual_parts"."color",
"bricktracker_individual_parts"."quantity",
"bricktracker_individual_parts"."missing",
"bricktracker_individual_parts"."damaged",
"bricktracker_individual_parts"."checked",
"bricktracker_individual_parts"."description",
"bricktracker_individual_parts"."storage",
"bricktracker_individual_parts"."purchase_location",
"bricktracker_individual_parts"."purchase_date",
"bricktracker_individual_parts"."purchase_price",
"rebrickable_parts"."name",
"rebrickable_parts"."color_name",
"rebrickable_parts"."color_rgb",
"rebrickable_parts"."color_transparent",
"rebrickable_parts"."image",
"rebrickable_parts"."url",
"bricktracker_metadata_storages"."name" AS "storage_name",
"bricktracker_metadata_purchase_locations"."name" AS "purchase_location_name"
FROM "bricktracker_individual_parts"
INNER JOIN "rebrickable_parts"
ON "bricktracker_individual_parts"."part" = "rebrickable_parts"."part"
AND "bricktracker_individual_parts"."color" = "rebrickable_parts"."color_id"
LEFT JOIN "bricktracker_metadata_storages"
ON "bricktracker_individual_parts"."storage" IS NOT DISTINCT FROM "bricktracker_metadata_storages"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations"
ON "bricktracker_individual_parts"."purchase_location" IS NOT DISTINCT FROM "bricktracker_metadata_purchase_locations"."id"
WHERE "bricktracker_individual_parts"."storage" IS NULL
ORDER BY "bricktracker_individual_parts"."part", "bricktracker_individual_parts"."color"
@@ -0,0 +1,44 @@
-- Select a specific individual part by UUID
SELECT
"bricktracker_individual_parts"."id",
"bricktracker_individual_parts"."part",
"bricktracker_individual_parts"."color",
"bricktracker_individual_parts"."quantity",
"bricktracker_individual_parts"."missing",
"bricktracker_individual_parts"."damaged",
"bricktracker_individual_parts"."checked",
"bricktracker_individual_parts"."description",
"bricktracker_individual_parts"."lot_id",
"bricktracker_individual_parts"."storage",
"bricktracker_individual_parts"."purchase_location",
"bricktracker_individual_parts"."purchase_date",
"bricktracker_individual_parts"."purchase_price",
"rebrickable_parts"."name" AS "part_name",
"rebrickable_parts"."color_name",
"rebrickable_parts"."color_rgb",
"rebrickable_parts"."color_transparent",
"rebrickable_parts"."category",
"rebrickable_parts"."image",
"rebrickable_parts"."image_id",
"rebrickable_parts"."url",
"rebrickable_parts"."bricklink_part_num",
"rebrickable_parts"."bricklink_color_id",
"rebrickable_parts"."bricklink_color_name"
{% if owners %},{{ owners }}{% endif %}
{% if statuses %},{{ statuses }}{% endif %}
{% if tags %},{{ tags }}{% endif %}
FROM "bricktracker_individual_parts"
INNER JOIN "rebrickable_parts"
ON "bricktracker_individual_parts"."part" = "rebrickable_parts"."part"
AND "bricktracker_individual_parts"."color" = "rebrickable_parts"."color_id"
LEFT JOIN "bricktracker_set_owners"
ON "bricktracker_individual_parts"."id" IS NOT DISTINCT FROM "bricktracker_set_owners"."id"
LEFT JOIN "bricktracker_set_statuses"
ON "bricktracker_individual_parts"."id" IS NOT DISTINCT FROM "bricktracker_set_statuses"."id"
LEFT JOIN "bricktracker_set_tags"
ON "bricktracker_individual_parts"."id" IS NOT DISTINCT FROM "bricktracker_set_tags"."id"
WHERE "bricktracker_individual_parts"."id" = :id;
@@ -0,0 +1,4 @@
-- Update checked status for an individual part
UPDATE "bricktracker_individual_parts"
SET "checked" = :checked
WHERE "id" = :id
@@ -0,0 +1,4 @@
-- Update damaged count for an individual part
UPDATE "bricktracker_individual_parts"
SET "damaged" = :damaged
WHERE "id" = :id
@@ -0,0 +1,4 @@
-- Update description for an individual part
UPDATE "bricktracker_individual_parts"
SET "description" = :description
WHERE "id" = :id;
@@ -0,0 +1,4 @@
-- Update a specific field in bricktracker_individual_parts
UPDATE "bricktracker_individual_parts"
SET "{{ field }}" = :value
WHERE "id" = :id
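Unlike the value, the `{{ field }}` column name in the update above is substituted at render time and cannot be bound as a parameter, so the caller must constrain it before rendering. A minimal sketch of a whitelist guard (the allowed set and helper name are illustrative, mirroring columns seen in this diff rather than BrickTracker's actual code):

```python
import sqlite3

# Columns that callers may legitimately update; anything else is rejected
# before it ever reaches the SQL text.
ALLOWED_FIELDS = {"missing", "damaged", "checked", "quantity", "description"}

def render_update(field: str) -> str:
    if field not in ALLOWED_FIELDS:
        raise ValueError(f"refusing to update column {field!r}")
    # The value itself still goes through a bound parameter.
    return (
        f'UPDATE "bricktracker_individual_parts" '
        f'SET "{field}" = :value WHERE "id" = :id'
    )

conn = sqlite3.connect(":memory:")
conn.execute(
    'CREATE TABLE "bricktracker_individual_parts" ("id" TEXT, "missing" INTEGER)'
)
conn.execute('INSERT INTO "bricktracker_individual_parts" VALUES (?, ?)', ("p1", 0))
conn.execute(render_update("missing"), {"value": 3, "id": "p1"})
print(
    conn.execute('SELECT "missing" FROM "bricktracker_individual_parts"').fetchone()[0]
)  # 3
```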
@@ -0,0 +1,4 @@
-- Update missing count for an individual part
UPDATE "bricktracker_individual_parts"
SET "missing" = :missing
WHERE "id" = :id
@@ -0,0 +1,4 @@
-- Update quantity for an individual part
UPDATE "bricktracker_individual_parts"
SET "quantity" = :quantity
WHERE "id" = :id;
@@ -0,0 +1,10 @@
-- Update the editable fields of an individual part
UPDATE "bricktracker_individual_parts"
SET
"quantity" = :quantity,
"description" = :description,
"storage" = :storage,
"purchase_location" = :purchase_location,
"purchase_date" = :purchase_date,
"purchase_price" = :purchase_price
WHERE "id" = :id
@@ -0,0 +1,22 @@
-- A bit unsafe, as it does not use a prepared statement, but it should
-- not be possible to inject anything through the {{ id }} context
BEGIN TRANSACTION;
-- Delete all individual parts associated with this lot
DELETE FROM "bricktracker_individual_parts"
WHERE "lot_id" IS NOT DISTINCT FROM '{{ id }}';
-- Delete lot owners (using consolidated metadata table)
DELETE FROM "bricktracker_set_owners"
WHERE "id" IS NOT DISTINCT FROM '{{ id }}';
-- Delete lot tags (using consolidated metadata table)
DELETE FROM "bricktracker_set_tags"
WHERE "id" IS NOT DISTINCT FROM '{{ id }}';
-- Delete the lot itself
DELETE FROM "bricktracker_individual_part_lots"
WHERE "id" IS NOT DISTINCT FROM '{{ id }}';
COMMIT;
@@ -0,0 +1,20 @@
-- Insert a new individual part lot
INSERT INTO "bricktracker_individual_part_lots" (
"id",
"name",
"description",
"created_date",
"storage",
"purchase_location",
"purchase_date",
"purchase_price"
) VALUES (
:id,
:name,
:description,
:created_date,
:storage,
:purchase_location,
:purchase_date,
:purchase_price
)
@@ -0,0 +1,22 @@
-- List all individual part lots with their part counts
SELECT
"bricktracker_individual_part_lots"."id",
"bricktracker_individual_part_lots"."name",
"bricktracker_individual_part_lots"."description",
"bricktracker_individual_part_lots"."created_date",
"bricktracker_individual_part_lots"."storage",
"bricktracker_individual_part_lots"."purchase_location",
"bricktracker_individual_part_lots"."purchase_date",
"bricktracker_individual_part_lots"."purchase_price",
"bricktracker_metadata_storages"."name" AS "storage_name",
"bricktracker_metadata_purchase_locations"."name" AS "purchase_location_name",
COUNT("bricktracker_individual_parts"."id") AS "part_count"
FROM "bricktracker_individual_part_lots"
LEFT JOIN "bricktracker_metadata_storages"
ON "bricktracker_individual_part_lots"."storage" IS NOT DISTINCT FROM "bricktracker_metadata_storages"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations"
ON "bricktracker_individual_part_lots"."purchase_location" IS NOT DISTINCT FROM "bricktracker_metadata_purchase_locations"."id"
LEFT JOIN "bricktracker_individual_parts"
ON "bricktracker_individual_part_lots"."id" = "bricktracker_individual_parts"."lot_id"
GROUP BY "bricktracker_individual_part_lots"."id"
ORDER BY "bricktracker_individual_part_lots"."created_date" DESC
@@ -0,0 +1,24 @@
-- List lots containing a specific part and color
SELECT DISTINCT
"bricktracker_individual_part_lots"."id",
"bricktracker_individual_part_lots"."name",
"bricktracker_individual_part_lots"."description",
"bricktracker_individual_part_lots"."created_date",
"bricktracker_individual_part_lots"."storage",
"bricktracker_individual_part_lots"."purchase_location",
"bricktracker_individual_part_lots"."purchase_date",
"bricktracker_individual_part_lots"."purchase_price",
"bricktracker_metadata_storages"."name" AS "storage_name",
"bricktracker_metadata_purchase_locations"."name" AS "purchase_location_name",
COUNT("bricktracker_individual_parts"."id") AS "part_count"
FROM "bricktracker_individual_part_lots"
INNER JOIN "bricktracker_individual_parts"
ON "bricktracker_individual_part_lots"."id" = "bricktracker_individual_parts"."lot_id"
LEFT JOIN "bricktracker_metadata_storages"
ON "bricktracker_individual_part_lots"."storage" IS NOT DISTINCT FROM "bricktracker_metadata_storages"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations"
ON "bricktracker_individual_part_lots"."purchase_location" IS NOT DISTINCT FROM "bricktracker_metadata_purchase_locations"."id"
WHERE "bricktracker_individual_parts"."part" = :part
AND "bricktracker_individual_parts"."color" = :color
GROUP BY "bricktracker_individual_part_lots"."id"
ORDER BY "bricktracker_individual_part_lots"."created_date" DESC
@@ -0,0 +1,22 @@
SELECT
"bricktracker_individual_part_lots"."id",
"bricktracker_individual_part_lots"."name",
"bricktracker_individual_part_lots"."description",
"bricktracker_individual_part_lots"."created_date",
"bricktracker_individual_part_lots"."storage",
"bricktracker_individual_part_lots"."purchase_location",
"bricktracker_individual_part_lots"."purchase_date",
"bricktracker_individual_part_lots"."purchase_price",
"bricktracker_metadata_storages"."name" AS "storage_name",
"bricktracker_metadata_purchase_locations"."name" AS "purchase_location_name",
COUNT("bricktracker_individual_parts"."id") AS "part_count"
FROM "bricktracker_individual_part_lots"
LEFT JOIN "bricktracker_metadata_storages"
ON "bricktracker_individual_part_lots"."storage" IS NOT DISTINCT FROM "bricktracker_metadata_storages"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations"
ON "bricktracker_individual_part_lots"."purchase_location" IS NOT DISTINCT FROM "bricktracker_metadata_purchase_locations"."id"
LEFT JOIN "bricktracker_individual_parts"
ON "bricktracker_individual_part_lots"."id" = "bricktracker_individual_parts"."lot_id"
WHERE "bricktracker_individual_part_lots"."storage" = :storage
GROUP BY "bricktracker_individual_part_lots"."id"
ORDER BY "bricktracker_individual_part_lots"."created_date" DESC
@@ -0,0 +1,26 @@
SELECT
"bricktracker_individual_parts"."id",
"bricktracker_individual_parts"."part",
"bricktracker_individual_parts"."color",
"bricktracker_individual_parts"."quantity",
"bricktracker_individual_parts"."missing",
"bricktracker_individual_parts"."damaged",
"bricktracker_individual_parts"."checked",
"bricktracker_individual_parts"."description",
"bricktracker_individual_parts"."storage",
"bricktracker_individual_parts"."purchase_location",
"bricktracker_individual_parts"."purchase_date",
"bricktracker_individual_parts"."purchase_price",
"bricktracker_individual_parts"."lot_id",
"rebrickable_parts"."name",
"rebrickable_parts"."color_name",
"rebrickable_parts"."color_rgb",
"rebrickable_parts"."color_transparent",
"rebrickable_parts"."image",
"rebrickable_parts"."url"
FROM "bricktracker_individual_parts"
INNER JOIN "rebrickable_parts"
ON "bricktracker_individual_parts"."part" = "rebrickable_parts"."part"
AND "bricktracker_individual_parts"."color" = "rebrickable_parts"."color_id"
WHERE "bricktracker_individual_parts"."lot_id" = :lot_id
ORDER BY "rebrickable_parts"."name" ASC, "bricktracker_individual_parts"."color" ASC
@@ -0,0 +1,23 @@
SELECT
"bricktracker_individual_part_lots"."id",
"bricktracker_individual_part_lots"."name",
"bricktracker_individual_part_lots"."description",
"bricktracker_individual_part_lots"."created_date",
"bricktracker_individual_part_lots"."storage",
"bricktracker_individual_part_lots"."purchase_location",
"bricktracker_individual_part_lots"."purchase_date",
"bricktracker_individual_part_lots"."purchase_price",
"bricktracker_metadata_storages"."name" AS "storage_name",
"bricktracker_metadata_purchase_locations"."name" AS "purchase_location_name",
COUNT("bricktracker_individual_parts"."id") AS "part_count"
FROM "bricktracker_individual_part_lots"
LEFT JOIN "bricktracker_metadata_storages"
ON "bricktracker_individual_part_lots"."storage" IS NOT DISTINCT FROM "bricktracker_metadata_storages"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations"
ON "bricktracker_individual_part_lots"."purchase_location" IS NOT DISTINCT FROM "bricktracker_metadata_purchase_locations"."id"
INNER JOIN "bricktracker_individual_parts"
ON "bricktracker_individual_part_lots"."id" = "bricktracker_individual_parts"."lot_id"
WHERE "bricktracker_individual_parts"."missing" > 0
OR "bricktracker_individual_parts"."damaged" > 0
GROUP BY "bricktracker_individual_part_lots"."id"
ORDER BY "bricktracker_individual_part_lots"."created_date" DESC
@@ -0,0 +1,22 @@
SELECT
"bricktracker_individual_part_lots"."id",
"bricktracker_individual_part_lots"."name",
"bricktracker_individual_part_lots"."description",
"bricktracker_individual_part_lots"."created_date",
"bricktracker_individual_part_lots"."storage",
"bricktracker_individual_part_lots"."purchase_location",
"bricktracker_individual_part_lots"."purchase_date",
"bricktracker_individual_part_lots"."purchase_price",
"bricktracker_metadata_storages"."name" AS "storage_name",
"bricktracker_metadata_purchase_locations"."name" AS "purchase_location_name",
COUNT("bricktracker_individual_parts"."id") AS "part_count"
FROM "bricktracker_individual_part_lots"
LEFT JOIN "bricktracker_metadata_storages"
ON "bricktracker_individual_part_lots"."storage" IS NOT DISTINCT FROM "bricktracker_metadata_storages"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations"
ON "bricktracker_individual_part_lots"."purchase_location" IS NOT DISTINCT FROM "bricktracker_metadata_purchase_locations"."id"
LEFT JOIN "bricktracker_individual_parts"
ON "bricktracker_individual_part_lots"."id" = "bricktracker_individual_parts"."lot_id"
-- NULL-safe match: also selects lots with no purchase location when :purchase_location is NULL
WHERE "bricktracker_individual_part_lots"."purchase_location" IS NOT DISTINCT FROM :purchase_location
GROUP BY "bricktracker_individual_part_lots"."id"
ORDER BY "bricktracker_individual_part_lots"."created_date" DESC
@@ -0,0 +1,22 @@
SELECT
"bricktracker_individual_part_lots"."id",
"bricktracker_individual_part_lots"."name",
"bricktracker_individual_part_lots"."description",
"bricktracker_individual_part_lots"."created_date",
"bricktracker_individual_part_lots"."storage",
"bricktracker_individual_part_lots"."purchase_location",
"bricktracker_individual_part_lots"."purchase_date",
"bricktracker_individual_part_lots"."purchase_price",
"bricktracker_metadata_storages"."name" AS "storage_name",
"bricktracker_metadata_purchase_locations"."name" AS "purchase_location_name",
COUNT("bricktracker_individual_parts"."id") AS "part_count"
FROM "bricktracker_individual_part_lots"
LEFT JOIN "bricktracker_metadata_storages"
ON "bricktracker_individual_part_lots"."storage" IS NOT DISTINCT FROM "bricktracker_metadata_storages"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations"
ON "bricktracker_individual_part_lots"."purchase_location" IS NOT DISTINCT FROM "bricktracker_metadata_purchase_locations"."id"
LEFT JOIN "bricktracker_individual_parts"
ON "bricktracker_individual_part_lots"."id" = "bricktracker_individual_parts"."lot_id"
WHERE "bricktracker_individual_part_lots"."storage" IS NOT DISTINCT FROM :storage
GROUP BY "bricktracker_individual_part_lots"."id"
ORDER BY "bricktracker_individual_part_lots"."created_date" DESC
@@ -0,0 +1,22 @@
SELECT
"bricktracker_individual_part_lots"."id",
"bricktracker_individual_part_lots"."name",
"bricktracker_individual_part_lots"."description",
"bricktracker_individual_part_lots"."created_date",
"bricktracker_individual_part_lots"."storage",
"bricktracker_individual_part_lots"."purchase_location",
"bricktracker_individual_part_lots"."purchase_date",
"bricktracker_individual_part_lots"."purchase_price",
"bricktracker_metadata_storages"."name" AS "storage_name",
"bricktracker_metadata_purchase_locations"."name" AS "purchase_location_name",
COUNT("bricktracker_individual_parts"."id") AS "part_count"
FROM "bricktracker_individual_part_lots"
LEFT JOIN "bricktracker_metadata_storages"
ON "bricktracker_individual_part_lots"."storage" IS NOT DISTINCT FROM "bricktracker_metadata_storages"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations"
ON "bricktracker_individual_part_lots"."purchase_location" IS NOT DISTINCT FROM "bricktracker_metadata_purchase_locations"."id"
LEFT JOIN "bricktracker_individual_parts"
ON "bricktracker_individual_part_lots"."id" = "bricktracker_individual_parts"."lot_id"
WHERE "bricktracker_individual_part_lots"."storage" IS NULL
GROUP BY "bricktracker_individual_part_lots"."id"
ORDER BY "bricktracker_individual_part_lots"."created_date" DESC
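The storage-filter variants above differ in NULL handling: plain `=` never matches a NULL storage, while `IS NOT DISTINCT FROM` (SQLite >= 3.39, a standard-SQL alias for SQLite's `IS`) treats two NULLs as equal. A minimal sketch of the distinction, written with `IS` so it also runs on older bundled SQLite versions:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE lots (id TEXT, storage TEXT);
INSERT INTO lots VALUES ('a', 's1'), ('b', NULL);
""")

# Plain '=' never matches NULL: binding None yields "storage = NULL", which is
# never true, so no rows come back.
rows_eq = con.execute("SELECT id FROM lots WHERE storage = ?", (None,)).fetchall()

# 'IS' is SQLite's NULL-safe equality; 'IS NOT DISTINCT FROM' in the queries
# above is the standard-SQL spelling of the same operator.
rows_is = con.execute("SELECT id FROM lots WHERE storage IS ?", (None,)).fetchall()

print(rows_eq)  # []
print(rows_is)  # [('b',)]
```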
@@ -0,0 +1,28 @@
SELECT
"bricktracker_individual_part_lots"."id",
"bricktracker_individual_part_lots"."name",
"bricktracker_individual_part_lots"."description",
"bricktracker_individual_part_lots"."created_date",
"bricktracker_individual_part_lots"."storage",
"bricktracker_individual_part_lots"."purchase_location",
"bricktracker_individual_part_lots"."purchase_date",
"bricktracker_individual_part_lots"."purchase_price",
"bricktracker_metadata_storages"."name" AS "storage_name",
"bricktracker_metadata_purchase_locations"."name" AS "purchase_location_name"
{% if owners %},{{ owners }}{% endif %}
{% if tags %},{{ tags }}{% endif %}
FROM "bricktracker_individual_part_lots"
LEFT JOIN "bricktracker_metadata_storages"
ON "bricktracker_individual_part_lots"."storage" IS NOT DISTINCT FROM "bricktracker_metadata_storages"."id"
LEFT JOIN "bricktracker_metadata_purchase_locations"
ON "bricktracker_individual_part_lots"."purchase_location" IS NOT DISTINCT FROM "bricktracker_metadata_purchase_locations"."id"
LEFT JOIN "bricktracker_set_owners"
ON "bricktracker_individual_part_lots"."id" IS NOT DISTINCT FROM "bricktracker_set_owners"."id"
-- Note: Part lots don't have statuses, only owners and tags
LEFT JOIN "bricktracker_set_tags"
ON "bricktracker_individual_part_lots"."id" IS NOT DISTINCT FROM "bricktracker_set_tags"."id"
WHERE "bricktracker_individual_part_lots"."id" = :id
@@ -0,0 +1,4 @@
-- Update individual part lot description
UPDATE "bricktracker_individual_part_lots"
SET "description" = :description
WHERE "id" = :id
@@ -0,0 +1,4 @@
-- Update individual part lot name
UPDATE "bricktracker_individual_part_lots"
SET "name" = :name
WHERE "id" = :id
@@ -0,0 +1,4 @@
-- Update individual part lot purchase date
UPDATE "bricktracker_individual_part_lots"
SET "purchase_date" = :purchase_date
WHERE "id" = :id
@@ -0,0 +1,4 @@
-- Update individual part lot purchase location
UPDATE "bricktracker_individual_part_lots"
SET "purchase_location" = :purchase_location
WHERE "id" = :id
@@ -0,0 +1,4 @@
-- Update individual part lot purchase price
UPDATE "bricktracker_individual_part_lots"
SET "purchase_price" = :purchase_price
WHERE "id" = :id
@@ -0,0 +1,4 @@
-- Update individual part lot storage
UPDATE "bricktracker_individual_part_lots"
SET "storage" = :storage
WHERE "id" = :id
@@ -0,0 +1,9 @@
-- description: Add checked field to bricktracker_parts table for part walkthrough tracking
BEGIN TRANSACTION;
-- Add checked field to the bricktracker_parts table
-- This allows users to track which parts they have checked during walkthroughs
ALTER TABLE "bricktracker_parts" ADD COLUMN "checked" BOOLEAN DEFAULT 0;
COMMIT;
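With a constant default like this, `ALTER TABLE ... ADD COLUMN` does not rewrite existing rows; they simply read the default back, so the backfill is implicit. A quick sketch of that behavior against a pared-down table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE bricktracker_parts (id TEXT, part TEXT);
INSERT INTO bricktracker_parts VALUES ('s1', '3001');
ALTER TABLE bricktracker_parts ADD COLUMN "checked" BOOLEAN DEFAULT 0;
""")

# The pre-existing row reads the new column's default without any UPDATE.
checked = con.execute("SELECT checked FROM bricktracker_parts").fetchone()[0]
print(checked)
```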
@@ -0,0 +1,56 @@
-- description: Performance optimization indexes
-- High-impact composite index for problem parts aggregation
-- Used in set listings, statistics, and problem reports
CREATE INDEX IF NOT EXISTS idx_bricktracker_parts_id_missing_damaged
ON bricktracker_parts(id, missing, damaged);
-- Composite index for parts lookup by part and color
-- Used in part listings and filtering operations
CREATE INDEX IF NOT EXISTS idx_bricktracker_parts_part_color_spare
ON bricktracker_parts(part, color, spare);
-- Composite index for set storage filtering
-- Used in set listings filtered by storage location
CREATE INDEX IF NOT EXISTS idx_bricktracker_sets_set_storage
ON bricktracker_sets("set", storage);
-- Search optimization index for set names
-- Improves text search performance on set listings
CREATE INDEX IF NOT EXISTS idx_rebrickable_sets_name_lower
ON rebrickable_sets(LOWER(name));
-- Search optimization index for part names
-- Improves text search performance on part listings
CREATE INDEX IF NOT EXISTS idx_rebrickable_parts_name_lower
ON rebrickable_parts(LOWER(name));
-- Additional indexes for common join patterns
-- Set purchase filtering
CREATE INDEX IF NOT EXISTS idx_bricktracker_sets_purchase_location
ON bricktracker_sets(purchase_location);
-- Parts quantity filtering
CREATE INDEX IF NOT EXISTS idx_bricktracker_parts_quantity
ON bricktracker_parts(quantity);
-- Year-based filtering optimization
CREATE INDEX IF NOT EXISTS idx_rebrickable_sets_year
ON rebrickable_sets(year);
-- Theme-based filtering optimization
CREATE INDEX IF NOT EXISTS idx_rebrickable_sets_theme_id
ON rebrickable_sets(theme_id);
-- Rebrickable sets number and version for sorting
CREATE INDEX IF NOT EXISTS idx_rebrickable_sets_number_version
ON rebrickable_sets(number, version);
-- Purchase date filtering and sorting
CREATE INDEX IF NOT EXISTS idx_bricktracker_sets_purchase_date
ON bricktracker_sets(purchase_date);
-- Minifigures aggregation optimization
CREATE INDEX IF NOT EXISTS idx_bricktracker_minifigures_id_quantity
ON bricktracker_minifigures(id, quantity);
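Whether a query actually picks up one of these indexes can be checked with `EXPLAIN QUERY PLAN`. A sketch against a pared-down `bricktracker_parts`; the plan wording varies between SQLite versions, but the index name appears in the detail column when it is chosen:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE bricktracker_parts (id TEXT, part TEXT, color INTEGER, spare BOOLEAN,
                                 quantity INTEGER, missing INTEGER, damaged INTEGER);
CREATE INDEX IF NOT EXISTS idx_bricktracker_parts_part_color_spare
    ON bricktracker_parts(part, color, spare);
""")

# Equality constraints on the two leading index columns let SQLite use the
# composite index; each plan row is (id, parent, notused, detail).
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM bricktracker_parts WHERE part = ? AND color = ?",
    ("3001", 4),
).fetchall()
print(plan)
```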
@@ -0,0 +1,58 @@
-- description: Change set number column from INTEGER to TEXT to support alphanumeric set numbers
-- Temporarily disable foreign key constraints for this migration
-- This is necessary because we're recreating a table that other tables reference
-- We verify integrity at the end to ensure safety
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
-- Create new table with TEXT number column
CREATE TABLE "rebrickable_sets_new" (
"set" TEXT NOT NULL,
"number" TEXT NOT NULL,
"version" INTEGER NOT NULL,
"name" TEXT NOT NULL,
"year" INTEGER NOT NULL,
"theme_id" INTEGER NOT NULL,
"number_of_parts" INTEGER NOT NULL,
"image" TEXT,
"url" TEXT,
"last_modified" TEXT,
PRIMARY KEY("set")
);
-- Copy all data from old table to new table
-- Cast INTEGER number to TEXT explicitly
INSERT INTO "rebrickable_sets_new"
SELECT
"set",
CAST("number" AS TEXT),
"version",
"name",
"year",
"theme_id",
"number_of_parts",
"image",
"url",
"last_modified"
FROM "rebrickable_sets";
-- Drop old table
DROP TABLE "rebrickable_sets";
-- Rename new table to original name
ALTER TABLE "rebrickable_sets_new" RENAME TO "rebrickable_sets";
-- Recreate the index
CREATE INDEX IF NOT EXISTS idx_rebrickable_sets_number_version
ON rebrickable_sets(number, version);
-- Verify foreign key integrity before committing
-- foreign_key_check reports dangling references but does not abort on its own;
-- the migration runner must treat a non-empty result as a failure
PRAGMA foreign_key_check;
COMMIT;
-- Re-enable foreign key constraints
PRAGMA foreign_keys=ON;
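The rebuild above follows SQLite's standard pattern for changing a column type: disable foreign-key enforcement, create the new table, copy with an explicit `CAST`, swap names, and check integrity before re-enabling enforcement. A pared-down sketch of the same sequence driven from Python, with columns trimmed to the ones that matter here:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE rebrickable_sets (
    "set" TEXT NOT NULL, "number" INTEGER NOT NULL, "version" INTEGER NOT NULL,
    PRIMARY KEY("set")
);
INSERT INTO rebrickable_sets VALUES ('10179-1', 10179, 1);
""")

# Disable FK enforcement outside any transaction, rebuild inside one.
con.execute("PRAGMA foreign_keys=OFF")
con.executescript("""
BEGIN;
CREATE TABLE rebrickable_sets_new (
    "set" TEXT NOT NULL, "number" TEXT NOT NULL, "version" INTEGER NOT NULL,
    PRIMARY KEY("set")
);
INSERT INTO rebrickable_sets_new
    SELECT "set", CAST("number" AS TEXT), "version" FROM rebrickable_sets;
DROP TABLE rebrickable_sets;
ALTER TABLE rebrickable_sets_new RENAME TO rebrickable_sets;
COMMIT;
""")

# foreign_key_check returns one row per dangling reference; empty means clean.
violations = con.execute("PRAGMA foreign_key_check").fetchall()
con.execute("PRAGMA foreign_keys=ON")

number_type = con.execute('SELECT typeof("number") FROM rebrickable_sets').fetchone()[0]
print(number_type)  # 'text'
```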
@@ -0,0 +1,88 @@
-- description: Add individual minifigures and individual parts tables
-- Individual minifigures table - tracks individual minifigures not associated with sets
CREATE TABLE IF NOT EXISTS "bricktracker_individual_minifigures" (
"id" TEXT NOT NULL,
"figure" TEXT NOT NULL,
"quantity" INTEGER NOT NULL DEFAULT 1,
"description" TEXT,
"storage" TEXT, -- Storage bin location
"purchase_date" REAL, -- Purchase date
"purchase_location" TEXT, -- Purchase location
"purchase_price" REAL, -- Purchase price
PRIMARY KEY("id"),
FOREIGN KEY("figure") REFERENCES "rebrickable_minifigures"("figure"),
FOREIGN KEY("storage") REFERENCES "bricktracker_metadata_storages"("id"),
FOREIGN KEY("purchase_location") REFERENCES "bricktracker_metadata_purchase_locations"("id")
);
-- Metadata for individual minifigures: use bricktracker_set_owners, bricktracker_set_tags, bricktracker_set_statuses tables
-- Parts table for individual minifigures - tracks constituent parts
CREATE TABLE IF NOT EXISTS "bricktracker_individual_minifigure_parts" (
"id" TEXT NOT NULL,
"part" TEXT NOT NULL,
"color" INTEGER NOT NULL,
"spare" BOOLEAN NOT NULL,
"quantity" INTEGER NOT NULL,
"element" INTEGER,
"rebrickable_inventory" INTEGER NOT NULL,
"missing" INTEGER NOT NULL DEFAULT 0,
"damaged" INTEGER NOT NULL DEFAULT 0,
"checked" BOOLEAN DEFAULT 0,
PRIMARY KEY("id", "part", "color", "spare"),
FOREIGN KEY("id") REFERENCES "bricktracker_individual_minifigures"("id"),
FOREIGN KEY("part", "color") REFERENCES "rebrickable_parts"("part", "color_id")
);
-- Individual parts table - tracks individual parts not associated with sets
CREATE TABLE IF NOT EXISTS "bricktracker_individual_parts" (
"id" TEXT NOT NULL,
"part" TEXT NOT NULL,
"color" INTEGER NOT NULL,
"quantity" INTEGER NOT NULL DEFAULT 1,
"description" TEXT,
"storage" TEXT, -- Storage bin location
"purchase_date" REAL, -- Purchase date
"purchase_location" TEXT, -- Purchase location
"purchase_price" REAL, -- Purchase price
PRIMARY KEY("id"),
FOREIGN KEY("part", "color") REFERENCES "rebrickable_parts"("part", "color_id"),
FOREIGN KEY("storage") REFERENCES "bricktracker_metadata_storages"("id"),
FOREIGN KEY("purchase_location") REFERENCES "bricktracker_metadata_purchase_locations"("id")
);
-- Metadata for individual parts: use bricktracker_set_owners, bricktracker_set_tags, bricktracker_set_statuses tables
-- Indexes for individual minifigures
CREATE INDEX IF NOT EXISTS idx_bricktracker_individual_minifigures_figure
ON bricktracker_individual_minifigures(figure);
CREATE INDEX IF NOT EXISTS idx_bricktracker_individual_minifigures_storage
ON bricktracker_individual_minifigures(storage);
CREATE INDEX IF NOT EXISTS idx_bricktracker_individual_minifigures_purchase_location
ON bricktracker_individual_minifigures(purchase_location);
CREATE INDEX IF NOT EXISTS idx_bricktracker_individual_minifigures_purchase_date
ON bricktracker_individual_minifigures(purchase_date);
-- Indexes for individual minifigure parts
CREATE INDEX IF NOT EXISTS idx_bricktracker_individual_minifigure_parts_id_missing_damaged
ON bricktracker_individual_minifigure_parts(id, missing, damaged);
CREATE INDEX IF NOT EXISTS idx_bricktracker_individual_minifigure_parts_part_color
ON bricktracker_individual_minifigure_parts(part, color);
-- Indexes for individual parts
CREATE INDEX IF NOT EXISTS idx_bricktracker_individual_parts_part_color
ON bricktracker_individual_parts(part, color);
CREATE INDEX IF NOT EXISTS idx_bricktracker_individual_parts_storage
ON bricktracker_individual_parts(storage);
CREATE INDEX IF NOT EXISTS idx_bricktracker_individual_parts_purchase_location
ON bricktracker_individual_parts(purchase_location);
CREATE INDEX IF NOT EXISTS idx_bricktracker_individual_parts_purchase_date
ON bricktracker_individual_parts(purchase_date);
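The composite foreign key `("part", "color") REFERENCES "rebrickable_parts"("part", "color_id")` rejects part/color pairs that Rebrickable does not know about, provided `PRAGMA foreign_keys` is on for the connection (it is off by default in SQLite). A pared-down sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys=ON")
con.executescript("""
CREATE TABLE rebrickable_parts (
    "part" TEXT NOT NULL, "color_id" INTEGER NOT NULL,
    PRIMARY KEY("part", "color_id")
);
CREATE TABLE bricktracker_individual_parts (
    "id" TEXT NOT NULL, "part" TEXT NOT NULL, "color" INTEGER NOT NULL,
    "quantity" INTEGER NOT NULL DEFAULT 1,
    PRIMARY KEY("id"),
    FOREIGN KEY("part", "color") REFERENCES "rebrickable_parts"("part", "color_id")
);
INSERT INTO rebrickable_parts VALUES ('3001', 4);
""")

# A known part/color pair inserts fine; an unknown color is rejected.
con.execute("INSERT INTO bricktracker_individual_parts (id, part, color) VALUES ('p1', '3001', 4)")
try:
    con.execute("INSERT INTO bricktracker_individual_parts (id, part, color) VALUES ('p2', '3001', 99)")
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True
print(fk_enforced)  # True
```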

Some files were not shown because too many files have changed in this diff.