Compare commits

30 commits: 4131bc5d31...master
| Author | SHA1 | Date |
|---|---|---|
| | 181ab5a8e7 | |
| | fd192310c7 | |
| | b73e0b4622 | |
| | 0530c5d95f | |
| | ce2e58b6bc | |
| | ca462ac005 | |
| | e895beadb7 | |
| | 615af97043 | |
| | 595db869e5 | |
| | 537b48dc22 | |
| | 2c09745a9f | |
| | beb7c5e337 | |
| | 19705527a0 | |
| | 9e22bd0e20 | |
| | d27d8655bb | |
| | 6d75ec60bf | |
| | 84a94933b3 | |
| | 5e0e9f8a42 | |
| | 083739fd4e | |
| | 4f174972e3 | |
| | f9f22ba42c | |
| | 7300773b96 | |
| | 05c3687ab1 | |
| | aa65466a49 | |
| | 454cfd688c | |
| | 1e3800cc16 | |
| | d37e9e821a | |
| | de0dc58b99 | |
| | 059825f169 | |
| | 4b054ac9cc | |
.gitignore (vendored, new file, 11)
@@ -0,0 +1,11 @@
+/gallery
+/initialize.go
+/public/mithril.js
+
+/gallery.cflags
+/gallery.config
+/gallery.creator
+/gallery.creator.user
+/gallery.cxxflags
+/gallery.files
+/gallery.includes
README (deleted, 14)
@@ -1,14 +0,0 @@
-This is gallery software designed to maintain a shadow structure
-of your filesystem, in which you can attach metadata to your media,
-and query your collections in various ways.
-
-All media is content-addressed by its SHA-1 hash value, and at your option
-also perceptually hashed. Duplicate search is an essential feature.
-
-Prerequisites: Go, ImageMagick, xdg-utils
-
-The gallery is designed for simplicity, and easy interoperability.
-sqlite3, curl, jq, and the filesystem will take you a long way.
-
-The intended mode of use is running daily automated sync/thumbnail/dhash/tag
-batches in a cron job, or from a system timer. See test.sh for usage hints.
README.adoc (new file, 39)
@@ -0,0 +1,39 @@
+gallery
+=======
+
+This is gallery software designed to maintain a shadow structure
+of your filesystem, in which you can attach metadata to your media,
+and query your collections in various ways.
+
+All media is content-addressed by its SHA-1 hash value, and at your option
+also perceptually hashed. Duplicate search is an essential feature.
+
+The gallery is designed for simplicity, and easy interoperability.
+sqlite3, curl, jq, and the filesystem will take you a long way.
+
+Prerequisites: Go, ImageMagick, xdg-utils
+
+ImageMagick v7 is preferred, it doesn't shoot out of memory as often.
+
+Getting it to work
+------------------
+ # apt install build-essential git golang imagemagick xdg-utils
+ $ git clone https://git.janouch.name/p/gallery.git
+ $ cd gallery
+ $ make
+ $ ./gallery init G
+ $ ./gallery sync G ~/Pictures
+ $ ./gallery thumbnail G  # parallelized, with memory limits
+ $ ./gallery -threads 1 thumbnail G  # one thread only gets more memory
+ $ ./gallery dhash G
+ $ ./gallery web G :8080
+
+The intended mode of use is running daily automated sync/thumbnail/dhash/tag
+batches in a cron job, or from a systemd timer.
+
+The _web_ command needs to see the _public_ directory,
+and is friendly to reverse proxying.
+
+Demo
+----
+https://holedigging.club/gallery/
@@ -29,8 +29,7 @@ if you plan on using the GPU-enabled options.
  $ ./download.sh
  $ build/deeptagger models/deepdanbooru-v3-20211112-sgd-e28.model image.jpg
 
-Very little effort is made to make the project compatible with non-POSIX
-systems.
+The project requires a POSIX-compatible system to build.
 
 Options
 -------
@@ -47,7 +46,18 @@ Options
 --pipe::
 	Take input filenames from the standard input.
 --threshold 0.1::
-	Output weight threshold. Needs to be set very high on ML-Danbooru models.
+	Output weight threshold. Needs to be set higher on ML-Danbooru models.
+
+Tagging galleries
+-----------------
+The appropriate invocation depends on your machine, and the chosen model.
+Unless you have a powerful machine, or use a fast model, it may take forever.
+
+ $ find "$GALLERY/images" -type l \
+	| build/deeptagger --pipe -b 16 -t 0.5 \
+		models/ml_caformer_m36_dec-5-97527.model \
+	| sed 's|[^\t]*/||' \
+	| gallery tag "$GALLERY" caformer "ML-Danbooru CAFormer"
 
 Model benchmarks (Linux)
 ------------------------
@@ -62,16 +72,17 @@ GPU inference
 [cols="<,>,>", options=header]
 |===
 |Model|Batch size|Time
-|ML-Danbooru Caformer dec-5-97527|16|OOM
 |WD v1.4 ViT v2 (batch)|16|19 s
 |DeepDanbooru|16|21 s
 |WD v1.4 SwinV2 v2 (batch)|16|21 s
+|ML-Danbooru CAFormer dec-5-97527|16|25 s
 |WD v1.4 ViT v2 (batch)|4|27 s
 |WD v1.4 SwinV2 v2 (batch)|4|30 s
 |DeepDanbooru|4|31 s
 |ML-Danbooru TResNet-D 6-30000|16|31 s
 |WD v1.4 MOAT v2 (batch)|16|31 s
 |WD v1.4 ConvNeXT v2 (batch)|16|32 s
+|ML-Danbooru CAFormer dec-5-97527|4|32 s
 |WD v1.4 ConvNeXTV2 v2 (batch)|16|36 s
 |ML-Danbooru TResNet-D 6-30000|4|39 s
 |WD v1.4 ConvNeXT v2 (batch)|4|39 s
@@ -79,7 +90,7 @@ GPU inference
 |WD v1.4 ConvNeXTV2 v2 (batch)|4|43 s
 |WD v1.4 ViT v2|1|43 s
 |WD v1.4 ViT v2 (batch)|1|43 s
-|ML-Danbooru Caformer dec-5-97527|4|48 s
+|ML-Danbooru CAFormer dec-5-97527|1|52 s
 |DeepDanbooru|1|53 s
 |WD v1.4 MOAT v2|1|53 s
 |WD v1.4 ConvNeXT v2|1|54 s
@@ -90,7 +101,6 @@ GPU inference
 |WD v1.4 ConvNeXTV2 v2|1|56 s
 |ML-Danbooru TResNet-D 6-30000|1|58 s
 |WD v1.4 ConvNeXTV2 v2 (batch)|1|58 s
-|ML-Danbooru Caformer dec-5-97527|1|73 s
 |===
 
 CPU inference
@@ -110,6 +120,7 @@ CPU inference
 |WD v1.4 ConvNeXTV2 v2|1|245 s
 |WD v1.4 ConvNeXTV2 v2 (batch)|4|268 s
 |WD v1.4 ViT v2 (batch)|16|270 s
+|ML-Danbooru CAFormer dec-5-97527|4|270 s
 |WD v1.4 ConvNeXT v2 (batch)|1|272 s
 |WD v1.4 SwinV2 v2 (batch)|4|277 s
 |WD v1.4 ViT v2 (batch)|4|277 s
@@ -117,6 +128,7 @@ CPU inference
 |WD v1.4 SwinV2 v2 (batch)|1|300 s
 |WD v1.4 SwinV2 v2|1|302 s
 |WD v1.4 SwinV2 v2 (batch)|16|305 s
+|ML-Danbooru CAFormer dec-5-97527|16|305 s
 |WD v1.4 MOAT v2 (batch)|4|307 s
 |WD v1.4 ViT v2|1|308 s
 |WD v1.4 ViT v2 (batch)|1|311 s
@@ -124,9 +136,7 @@ CPU inference
 |WD v1.4 MOAT v2|1|332 s
 |WD v1.4 MOAT v2 (batch)|16|335 s
 |WD v1.4 MOAT v2 (batch)|1|339 s
-|ML-Danbooru Caformer dec-5-97527|4|637 s
-|ML-Danbooru Caformer dec-5-97527|16|689 s
-|ML-Danbooru Caformer dec-5-97527|1|829 s
+|ML-Danbooru CAFormer dec-5-97527|1|352 s
 |===
 
 Model benchmarks (macOS)
@@ -166,12 +176,12 @@ GPU inference
 |WD v1.4 ConvNeXTV2 v2 (batch)|1|160 s
 |WD v1.4 MOAT v2 (batch)|1|165 s
 |WD v1.4 SwinV2 v2|1|166 s
+|ML-Danbooru CAFormer dec-5-97527|1|263 s
 |WD v1.4 ConvNeXT v2|1|273 s
 |WD v1.4 MOAT v2|1|273 s
 |WD v1.4 ConvNeXTV2 v2|1|340 s
-|ML-Danbooru Caformer dec-5-97527|1|551 s
-|ML-Danbooru Caformer dec-5-97527|4|swap hell
-|ML-Danbooru Caformer dec-5-97527|8|swap hell
+|ML-Danbooru CAFormer dec-5-97527|4|445 s
+|ML-Danbooru CAFormer dec-5-97527|8|1790 s
 |WD v1.4 MOAT v2 (batch)|4|kernel panic
 |===
@@ -189,11 +199,14 @@ CPU inference
 |WD v1.4 SwinV2 v2 (batch)|1|98 s
 |ML-Danbooru TResNet-D 6-30000|4|99 s
 |WD v1.4 SwinV2 v2|1|99 s
+|ML-Danbooru CAFormer dec-5-97527|4|110 s
+|ML-Danbooru CAFormer dec-5-97527|8|110 s
 |WD v1.4 ViT v2 (batch)|4|111 s
 |WD v1.4 ViT v2 (batch)|8|111 s
 |WD v1.4 ViT v2 (batch)|1|113 s
 |WD v1.4 ViT v2|1|113 s
 |ML-Danbooru TResNet-D 6-30000|1|118 s
+|ML-Danbooru CAFormer dec-5-97527|1|122 s
 |WD v1.4 ConvNeXT v2 (batch)|8|124 s
 |WD v1.4 ConvNeXT v2 (batch)|4|125 s
 |WD v1.4 ConvNeXTV2 v2 (batch)|8|129 s
@@ -206,9 +219,6 @@ CPU inference
 |WD v1.4 MOAT v2 (batch)|1|156 s
 |WD v1.4 MOAT v2|1|156 s
 |WD v1.4 ConvNeXTV2 v2 (batch)|1|157 s
-|ML-Danbooru Caformer dec-5-97527|4|241 s
-|ML-Danbooru Caformer dec-5-97527|8|241 s
-|ML-Danbooru Caformer dec-5-97527|1|262 s
 |===
 
 Comparison with WDMassTagger
@@ -28,11 +28,9 @@ run() {
 for model in models/*.model
 do
 	name=$(sed -n 's/^name=//p' "$model")
-	run "" 1 "$model" "$@"
-	run "" 4 "$model" "$@"
-	run "" 16 "$model" "$@"
-
-	run --cpu 1 "$model" "$@"
-	run --cpu 4 "$model" "$@"
-	run --cpu 16 "$model" "$@"
+	for batch in 1 4 16
+	do
+		run "" $batch "$model" "$@"
+		run --cpu $batch "$model" "$@"
+	done
 done
@@ -116,7 +116,8 @@ read_config(Config &config, const char *path)
 }
 
 	read_tags(
-		std::filesystem::path(path).replace_extension("tags"), config.tags);
+		std::filesystem::path(path).replace_extension("tags").string(),
+		config.tags);
 }
 
 // --- Data preparation --------------------------------------------------------
@@ -309,11 +310,12 @@ run(std::vector<Magick::Image> &images, const Config &config,
 			if (config.sigmoid)
 				value = 1 / (1 + std::exp(-value));
 			if (value > g.threshold) {
-				printf("%s\t%.2f\t%s\n", images.at(i).fileName().c_str(),
-					value, config.tags.at(t).c_str());
+				printf("%s\t%s\t%.2f\n", images.at(i).fileName().c_str(),
+					config.tags.at(t).c_str(), value);
 			}
 		}
 	}
+	fflush(stdout);
 }
 
 // - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@@ -720,7 +722,7 @@ main(int argc, char *argv[])
 	// Load batched images in parallel (the first is for GM, the other for IM).
 	if (g.batch > 1) {
 		auto value = std::to_string(
-			std::max(std::thread::hardware_concurrency() / g.batch, 1L));
+			std::max(long(std::thread::hardware_concurrency()) / g.batch, 1L));
 		setenv("OMP_NUM_THREADS", value.c_str(), true);
 		setenv("MAGICK_THREAD_LIMIT", value.c_str(), true);
 	}
@@ -115,7 +115,7 @@ wd14() {
 
 # These models are an undocumented mess, thus using ONNX preconversions.
 mldanbooru() {
-	local name=$1 basename=$2
+	local name=$1 size=$2 basename=$3
 	status "$name"
 
 	if ! [ -d ml-danbooru-onnx ]
@@ -138,7 +138,7 @@ mldanbooru() {
 	channels=rgb
 	normalize=true
 	pad=stretch
-	size=640
+	size=$size
 	interpret=sigmoid
 	END
 }
@@ -157,5 +157,7 @@ wd14 'WD v1.4 SwinV2 v2' 'SmilingWolf/wd-v1-4-swinv2-tagger-v2'
 wd14 'WD v1.4 MOAT v2' 'SmilingWolf/wd-v1-4-moat-tagger-v2'
 
 # As suggested by author https://github.com/IrisRainbowNeko/ML-Danbooru-webui
-mldanbooru 'ML-Danbooru Caformer dec-5-97527' 'ml_caformer_m36_dec-5-97527.onnx'
-mldanbooru 'ML-Danbooru TResNet-D 6-30000' 'TResnet-D-FLq_ema_6-30000.onnx'
+mldanbooru 'ML-Danbooru CAFormer dec-5-97527' \
+	448 'ml_caformer_m36_dec-5-97527.onnx'
+mldanbooru 'ML-Danbooru TResNet-D 6-30000' \
+	640 'TResnet-D-FLq_ema_6-30000.onnx'
@@ -23,6 +23,7 @@ CREATE TABLE IF NOT EXISTS node(
 ) STRICT;
 
 CREATE INDEX IF NOT EXISTS node__sha1 ON node(sha1);
+CREATE INDEX IF NOT EXISTS node__parent ON node(parent);
 CREATE UNIQUE INDEX IF NOT EXISTS node__parent_name
 	ON node(IFNULL(parent, 0), name);
 
@@ -76,7 +77,7 @@ CREATE TABLE IF NOT EXISTS tag_space(
 	id INTEGER NOT NULL,
 	name TEXT NOT NULL,
 	description TEXT,
-	CHECK (name NOT LIKE '%:%'),
+	CHECK (name NOT LIKE '%:%' AND name NOT LIKE '-%'),
 	PRIMARY KEY (id)
 ) STRICT;
main.go (579)
@@ -41,6 +41,9 @@ import (
 	"golang.org/x/image/webp"
 )
 
+// #include <unistd.h>
+import "C"
+
 var (
 	db               *sql.DB // sqlite database
 	galleryDirectory string  // gallery directory
@@ -59,19 +62,47 @@ func hammingDistance(a, b int64) int {
 	return bits.OnesCount64(uint64(a) ^ uint64(b))
 }
 
+type productAggregator float64
+
+func (pa *productAggregator) Step(v float64) {
+	*pa = productAggregator(float64(*pa) * v)
+}
+
+func (pa *productAggregator) Done() float64 {
+	return float64(*pa)
+}
+
+func newProductAggregator() *productAggregator {
+	pa := productAggregator(1)
+	return &pa
+}
+
 func init() {
 	sql.Register("sqlite3_custom", &sqlite3.SQLiteDriver{
 		ConnectHook: func(conn *sqlite3.SQLiteConn) error {
-			return conn.RegisterFunc("hamming", hammingDistance, true /*pure*/)
+			if err := conn.RegisterFunc(
+				"hamming", hammingDistance, true /*pure*/); err != nil {
+				return err
+			}
+			if err := conn.RegisterAggregator(
+				"product", newProductAggregator, true /*pure*/); err != nil {
+				return err
+			}
+			return nil
 		},
 	})
 }
 
 func openDB(directory string) error {
+	galleryDirectory = directory
+
 	var err error
 	db, err = sql.Open("sqlite3_custom", "file:"+filepath.Join(directory,
 		nameOfDB+"?_foreign_keys=1&_busy_timeout=1000"))
-	galleryDirectory = directory
+	if err != nil {
+		return err
+	}
+	_, err = db.Exec(initializeSQL)
 	return err
 }
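The hunk above registers a `hamming` scalar function and a `product` aggregate with the SQLite driver. As a standalone sketch of the arithmetic they implement (no SQLite involved; the names mirror the diff, the sample values are made up):

```go
package main

import (
	"fmt"
	"math/bits"
)

// hammingDistance counts differing bits between two 64-bit perceptual
// hashes, mirroring the "hamming" SQL function in the diff.
func hammingDistance(a, b int64) int {
	return bits.OnesCount64(uint64(a) ^ uint64(b))
}

// productAggregator mirrors the "product" SQL aggregate: the state starts
// at 1, Step multiplies in each row's value, Done returns the result.
type productAggregator float64

func (pa *productAggregator) Step(v float64) {
	*pa = productAggregator(float64(*pa) * v)
}

func (pa *productAggregator) Done() float64 {
	return float64(*pa)
}

func main() {
	// 0b1011 and 0b0010 differ in bits 0 and 3.
	fmt.Println(hammingDistance(0b1011, 0b0010)) // 2

	// Aggregating two tag weights multiplies them together.
	pa := productAggregator(1)
	for _, w := range []float64{0.9, 0.5} {
		pa.Step(w)
	}
	fmt.Println(pa.Done()) // 0.45
}
```

The multiplication is what makes multi-tag search scores behave like an AND: an image missing any positive tag picks up a weight of 0 and its product collapses to 0.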
@@ -270,11 +301,10 @@ func cmdInit(fs *flag.FlagSet, args []string) error {
 	if fs.NArg() != 1 {
 		return errWrongUsage
 	}
-	if err := openDB(fs.Arg(0)); err != nil {
+	if err := os.MkdirAll(fs.Arg(0), 0755); err != nil {
 		return err
 	}
-	if _, err := db.Exec(initializeSQL); err != nil {
+	if err := openDB(fs.Arg(0)); err != nil {
 		return err
 	}
 
@@ -291,49 +321,7 @@ func cmdInit(fs *flag.FlagSet, args []string) error {
 	return nil
 }
 
-// --- Web ---------------------------------------------------------------------
+// --- API: Browse -------------------------------------------------------------
 
-var hashRE = regexp.MustCompile(`^/.*?/([0-9a-f]{40})$`)
-var staticHandler http.Handler
-
-var page = template.Must(template.New("/").Parse(`<!DOCTYPE html><html><head>
-<title>Gallery</title>
-<meta charset="utf-8" />
-<meta name="viewport" content="width=device-width, initial-scale=1">
-<link rel=stylesheet href=style.css>
-</head><body>
-<noscript>This is a web application, and requires Javascript.</noscript>
-<script src=mithril.js></script>
-<script src=gallery.js></script>
-</body></html>`))
-
-func handleRequest(w http.ResponseWriter, r *http.Request) {
-	if r.URL.Path != "/" {
-		staticHandler.ServeHTTP(w, r)
-		return
-	}
-	if err := page.Execute(w, nil); err != nil {
-		log.Println(err)
-	}
-}
-
-func handleImages(w http.ResponseWriter, r *http.Request) {
-	if m := hashRE.FindStringSubmatch(r.URL.Path); m == nil {
-		http.NotFound(w, r)
-	} else {
-		http.ServeFile(w, r, imagePath(m[1]))
-	}
-}
-
-func handleThumbs(w http.ResponseWriter, r *http.Request) {
-	if m := hashRE.FindStringSubmatch(r.URL.Path); m == nil {
-		http.NotFound(w, r)
-	} else {
-		http.ServeFile(w, r, thumbPath(m[1]))
-	}
-}
-
-// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
 func getSubdirectories(tx *sql.Tx, parent int64) (names []string, err error) {
 	return dbCollectStrings(`SELECT name FROM node
@@ -413,7 +401,7 @@ func handleAPIBrowse(w http.ResponseWriter, r *http.Request) {
 	}
 }
 
-// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+// --- API: Tags ---------------------------------------------------------------
 
 type webTagNamespace struct {
 	Description string `json:"description"`
@@ -499,7 +487,7 @@ func handleAPITags(w http.ResponseWriter, r *http.Request) {
 	}
 }
 
-// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+// --- API: Duplicates ---------------------------------------------------------
 
 type webDuplicateImage struct {
 	SHA1 string `json:"sha1"`
@@ -642,7 +630,7 @@ func handleAPIDuplicates(w http.ResponseWriter, r *http.Request) {
 	}
 }
 
-// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+// --- API: Orphans ------------------------------------------------------------
 
 type webOrphanImage struct {
 	SHA1 string `json:"sha1"`
@@ -670,7 +658,9 @@ func getOrphanReplacement(webPath string) (*webOrphanImage, error) {
 	}
 
 	parent, err := idForDirectoryPath(tx, path[:len(path)-1], false)
-	if err != nil {
+	if errors.Is(err, sql.ErrNoRows) {
+		return nil, nil
+	} else if err != nil {
 		return nil, err
 	}
 
@@ -697,7 +687,8 @@ func getOrphans() (result []webOrphan, err error) {
 		FROM orphan AS o
 		JOIN image AS i ON o.sha1 = i.sha1
 		LEFT JOIN tag_assignment AS ta ON o.sha1 = ta.sha1
-		GROUP BY o.sha1`)
+		GROUP BY o.sha1
+		ORDER BY path`)
 	if err != nil {
 		return nil, err
 	}
@@ -739,7 +730,7 @@ func handleAPIOrphans(w http.ResponseWriter, r *http.Request) {
 	}
 }
 
-// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+// --- API: Image view ---------------------------------------------------------
 
 func getImageDimensions(sha1 string) (w int64, h int64, err error) {
 	err = db.QueryRow(`SELECT width, height FROM image WHERE sha1 = ?`,
@@ -842,7 +833,7 @@ func handleAPIInfo(w http.ResponseWriter, r *http.Request) {
 	}
 }
 
-// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+// --- API: Image similar ------------------------------------------------------
 
 type webSimilarImage struct {
 	SHA1 string `json:"sha1"`
@@ -854,15 +845,17 @@ type webSimilarImage struct {
 
 func getSimilar(sha1 string, dhash int64, pixels int64, distance int) (
 	result []webSimilarImage, err error) {
-	// For distance ∈ {0, 1}, this query is quite inefficient.
-	// In exchange, it's generic.
-	//
-	// If there's a dhash, there should also be thumbnail dimensions,
-	// so not bothering with IFNULL on them.
-	rows, err := db.Query(`
-		SELECT sha1, width * height, IFNULL(thumbw, 0), IFNULL(thumbh, 0)
-		FROM image WHERE sha1 <> ? AND dhash IS NOT NULL
-		AND hamming(dhash, ?) = ?`, sha1, dhash, distance)
+	// If there's a dhash, there should also be thumbnail dimensions.
+	var rows *sql.Rows
+	common := `SELECT sha1, width * height, IFNULL(thumbw, 0), IFNULL(thumbh, 0)
+		FROM image WHERE sha1 <> ? AND `
+	if distance == 0 {
+		rows, err = db.Query(common+`dhash = ?`, sha1, dhash)
+	} else {
+		// This is generic, but quite inefficient for distance ∈ {0, 1}.
+		rows, err = db.Query(common+`dhash IS NOT NULL
+			AND hamming(dhash, ?) = ?`, sha1, dhash, distance)
+	}
 	if err != nil {
 		return nil, err
 	}
@@ -952,35 +945,90 @@ func handleAPISimilar(w http.ResponseWriter, r *http.Request) {
 	}
 }
 
-// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+// --- API: Search -------------------------------------------------------------
+// The SQL building is the most miserable part of the whole program.
 
-// NOTE: AND will mean MULTIPLY(IFNULL(ta.weight, 0)) per SHA1.
-const searchCTE = `WITH
+const searchCTE1 = `WITH
 	matches(sha1, thumbw, thumbh, score) AS (
 		SELECT i.sha1, i.thumbw, i.thumbh, ta.weight AS score
 		FROM tag_assignment AS ta
 		JOIN image AS i ON i.sha1 = ta.sha1
-		WHERE ta.tag = ?
-	),
-	supertags(tag) AS (
-		SELECT DISTINCT ta.tag
-		FROM tag_assignment AS ta
-		JOIN matches AS m ON m.sha1 = ta.sha1
-	),
-	scoredtags(tag, score) AS (
-		-- The cross join is a deliberate optimization,
-		-- and this query may still be really slow.
-		SELECT st.tag, AVG(IFNULL(ta.weight, 0)) AS score
-		FROM matches AS m
-		CROSS JOIN supertags AS st
-		LEFT JOIN tag_assignment AS ta
-		ON ta.sha1 = m.sha1 AND ta.tag = st.tag
-		GROUP BY st.tag
-		-- Using the column alias doesn't fail, but it also doesn't work.
-		HAVING AVG(IFNULL(ta.weight, 0)) >= 0.01
+		WHERE ta.tag = %d
 	)
 `
+
+const searchCTEMulti = `WITH
+	positive(tag) AS (VALUES %s),
+	filtered(sha1) AS (%s),
+	matches(sha1, thumbw, thumbh, score) AS (
+		SELECT i.sha1, i.thumbw, i.thumbh,
+			product(IFNULL(ta.weight, 0)) AS score
+		FROM image AS i, positive AS p
+		JOIN filtered AS c ON i.sha1 = c.sha1
+		LEFT JOIN tag_assignment AS ta ON ta.sha1 = i.sha1 AND ta.tag = p.tag
+		GROUP BY i.sha1
+	)
+`
+
+func searchQueryToCTE(tx *sql.Tx, query string) (string, error) {
+	positive, negative := []int64{}, []int64{}
+	for _, word := range strings.Split(query, " ") {
+		if word == "" {
+			continue
+		}
+
+		space, tag, _ := strings.Cut(word, ":")
+
+		negated := false
+		if strings.HasPrefix(space, "-") {
+			space = space[1:]
+			negated = true
+		}
+
+		var tagID int64
+		err := tx.QueryRow(`
+			SELECT t.id FROM tag AS t
+			JOIN tag_space AS ts ON t.space = ts.id
+			WHERE ts.name = ? AND t.name = ?`, space, tag).Scan(&tagID)
+		if err != nil {
+			return "", err
+		}
+
+		if negated {
+			negative = append(negative, tagID)
+		} else {
+			positive = append(positive, tagID)
+		}
+	}
+
+	// Don't return most of the database, and simplify the following builder.
+	if len(positive) == 0 {
+		return "", errors.New("search is too wide")
+	}
+
+	// Optimise single tag searches.
+	if len(positive) == 1 && len(negative) == 0 {
+		return fmt.Sprintf(searchCTE1, positive[0]), nil
+	}
+
+	values := fmt.Sprintf(`(%d)`, positive[0])
+	filtered := fmt.Sprintf(
+		`SELECT sha1 FROM tag_assignment WHERE tag = %d`, positive[0])
+	for _, tagID := range positive[1:] {
+		values += fmt.Sprintf(`, (%d)`, tagID)
+		filtered += fmt.Sprintf(` INTERSECT
+			SELECT sha1 FROM tag_assignment WHERE tag = %d`, tagID)
+	}
+	for _, tagID := range negative {
+		filtered += fmt.Sprintf(` EXCEPT
+			SELECT sha1 FROM tag_assignment WHERE tag = %d`, tagID)
+	}
+
+	return fmt.Sprintf(searchCTEMulti, values, filtered), nil
+}
+
+// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
 
 type webTagMatch struct {
 	SHA1 string `json:"sha1"`
 	ThumbW int64 `json:"thumbW"`
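The query builder added above turns positive tags into an INTERSECT chain and negative tags into trailing EXCEPT clauses. A minimal standalone sketch of that string assembly, using made-up tag IDs (1 and 3 positive, 2 negative):

```go
package main

import "fmt"

// buildFiltered sketches the filtered(sha1) subquery assembled by
// searchQueryToCTE in the diff: one SELECT per positive tag joined with
// INTERSECT, then one EXCEPT per negative tag.
func buildFiltered(positive, negative []int64) string {
	filtered := fmt.Sprintf(
		`SELECT sha1 FROM tag_assignment WHERE tag = %d`, positive[0])
	for _, id := range positive[1:] {
		filtered += fmt.Sprintf(
			` INTERSECT SELECT sha1 FROM tag_assignment WHERE tag = %d`, id)
	}
	for _, id := range negative {
		filtered += fmt.Sprintf(
			` EXCEPT SELECT sha1 FROM tag_assignment WHERE tag = %d`, id)
	}
	return filtered
}

func main() {
	fmt.Println(buildFiltered([]int64{1, 3}, []int64{2}))
}
```

Since SQLite evaluates INTERSECT and EXCEPT left to right at equal precedence, the negative tags being appended last subtracts them from the already-intersected positive set.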
@@ -988,10 +1036,10 @@ type webTagMatch struct {
 	Score float32 `json:"score"`
 }
 
-func getTagMatches(tag int64) (matches []webTagMatch, err error) {
-	rows, err := db.Query(searchCTE+`
+func getTagMatches(tx *sql.Tx, cte string) (matches []webTagMatch, err error) {
+	rows, err := tx.Query(cte + `
 		SELECT sha1, IFNULL(thumbw, 0), IFNULL(thumbh, 0), score
-		FROM matches`, tag)
+		FROM matches`)
 	if err != nil {
 		return nil, err
 	}
@@ -1009,32 +1057,78 @@ func getTagMatches(tag int64) (matches []webTagMatch, err error) {
 	return matches, rows.Err()
 }
 
-type webTagRelated struct {
-	Tag string `json:"tag"`
-	Score float32 `json:"score"`
+type webTagSupertag struct {
+	space string
+	tag string
+	score float32
 }
 
-func getTagRelated(tag int64) (result map[string][]webTagRelated, err error) {
-	rows, err := db.Query(searchCTE+`
-		SELECT ts.name, t.name, st.score FROM scoredtags AS st
-		JOIN tag AS t ON st.tag = t.id
-		JOIN tag_space AS ts ON ts.id = t.space
-		ORDER BY st.score DESC`, tag)
+func getTagSupertags(tx *sql.Tx, cte string) (
+	result map[int64]*webTagSupertag, err error) {
+	rows, err := tx.Query(cte + `
+		SELECT DISTINCT ta.tag, ts.name, t.name
+		FROM tag_assignment AS ta
+		JOIN matches AS m ON m.sha1 = ta.sha1
+		JOIN tag AS t ON ta.tag = t.id
+		JOIN tag_space AS ts ON ts.id = t.space`)
 	if err != nil {
 		return nil, err
 	}
 	defer rows.Close()
 
-	result = make(map[string][]webTagRelated)
+	result = make(map[int64]*webTagSupertag)
 	for rows.Next() {
 		var (
-			space string
-			r webTagRelated
+			tag int64
+			st webTagSupertag
 		)
-		if err = rows.Scan(&space, &r.Tag, &r.Score); err != nil {
+		if err = rows.Scan(&tag, &st.space, &st.tag); err != nil {
 			return nil, err
 		}
-		result[space] = append(result[space], r)
+		result[tag] = &st
+	}
+	return result, rows.Err()
+}
+
+type webTagRelated struct {
+	Tag string `json:"tag"`
+	Score float32 `json:"score"`
+}
+
+func getTagRelated(tx *sql.Tx, cte string, matches int) (
+	result map[string][]webTagRelated, err error) {
+	// Not sure if this level of efficiency is achievable directly in SQL.
+	supertags, err := getTagSupertags(tx, cte)
+	if err != nil {
+		return nil, err
+	}
+
+	rows, err := tx.Query(cte + `
+		SELECT ta.tag, ta.weight
+		FROM tag_assignment AS ta
+		JOIN matches AS m ON m.sha1 = ta.sha1`)
+	if err != nil {
|
return nil, err
|
||||||
|
}
|
||||||
|
defer rows.Close()
|
||||||
|
|
||||||
|
for rows.Next() {
|
||||||
|
var (
|
||||||
|
tag int64
|
||||||
|
weight float32
|
||||||
|
)
|
||||||
|
if err = rows.Scan(&tag, &weight); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
supertags[tag].score += weight
|
||||||
|
}
|
||||||
|
|
||||||
|
result = make(map[string][]webTagRelated)
|
||||||
|
for _, info := range supertags {
|
||||||
|
if score := info.score / float32(matches); score >= 0.1 {
|
||||||
|
r := webTagRelated{Tag: info.tag, Score: score}
|
||||||
|
result[info.space] = append(result[info.space], r)
|
||||||
|
}
|
||||||
}
|
}
|
||||||
return result, rows.Err()
|
return result, rows.Err()
|
||||||
}
|
}
|
||||||
@@ -1053,13 +1147,14 @@ func handleAPISearch(w http.ResponseWriter, r *http.Request) {
|
|||||||
Related map[string][]webTagRelated `json:"related"`
|
Related map[string][]webTagRelated `json:"related"`
|
||||||
}
|
}
|
||||||
|
|
||||||
space, tag, _ := strings.Cut(params.Query, ":")
|
tx, err := db.Begin()
|
||||||
|
if err != nil {
|
||||||
|
http.Error(w, err.Error(), http.StatusInternalServerError)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
defer tx.Rollback()
|
||||||
|
|
||||||
var tagID int64
|
cte, err := searchQueryToCTE(tx, params.Query)
|
||||||
err := db.QueryRow(`
|
|
||||||
SELECT t.id FROM tag AS t
|
|
||||||
JOIN tag_space AS ts ON t.space = ts.id
|
|
||||||
WHERE ts.name = ? AND t.name = ?`, space, tag).Scan(&tagID)
|
|
||||||
if errors.Is(err, sql.ErrNoRows) {
|
if errors.Is(err, sql.ErrNoRows) {
|
||||||
http.Error(w, err.Error(), http.StatusNotFound)
|
http.Error(w, err.Error(), http.StatusNotFound)
|
||||||
return
|
return
|
||||||
@@ -1068,11 +1163,12 @@ func handleAPISearch(w http.ResponseWriter, r *http.Request) {
|
|||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
if result.Matches, err = getTagMatches(tagID); err != nil {
|
if result.Matches, err = getTagMatches(tx, cte); err != nil {
|
||||||
http.Error(w, err.Error(), http.StatusInternalServerError)
|
http.Error(w, err.Error(), http.StatusInternalServerError)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
if result.Related, err = getTagRelated(tagID); err != nil {
|
if result.Related, err = getTagRelated(tx, cte,
|
||||||
|
len(result.Matches)); err != nil {
|
||||||
http.Error(w, err.Error(), http.StatusInternalServerError)
|
http.Error(w, err.Error(), http.StatusInternalServerError)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
@@ -1082,7 +1178,47 @@ func handleAPISearch(w http.ResponseWriter, r *http.Request) {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
|
// --- Web ---------------------------------------------------------------------
|
||||||
+
+var hashRE = regexp.MustCompile(`^/.*?/([0-9a-f]{40})$`)
+var staticHandler http.Handler
+
+var page = template.Must(template.New("/").Parse(`<!DOCTYPE html><html><head>
+<title>Gallery</title>
+<meta charset="utf-8" />
+<meta name="viewport" content="width=device-width, initial-scale=1">
+<link rel=stylesheet href=style.css>
+</head><body>
+<noscript>This is a web application, and requires Javascript.</noscript>
+<script src=mithril.js></script>
+<script src=gallery.js></script>
+</body></html>`))
+
+func handleRequest(w http.ResponseWriter, r *http.Request) {
+	if r.URL.Path != "/" {
+		staticHandler.ServeHTTP(w, r)
+		return
+	}
+	if err := page.Execute(w, nil); err != nil {
+		log.Println(err)
+	}
+}
+
+func handleImages(w http.ResponseWriter, r *http.Request) {
+	if m := hashRE.FindStringSubmatch(r.URL.Path); m == nil {
+		http.NotFound(w, r)
+	} else {
+		http.ServeFile(w, r, imagePath(m[1]))
+	}
+}
+
+func handleThumbs(w http.ResponseWriter, r *http.Request) {
+	if m := hashRE.FindStringSubmatch(r.URL.Path); m == nil {
+		http.NotFound(w, r)
+	} else {
+		http.ServeFile(w, r, thumbPath(m[1]))
+	}
+}
+
 // cmdWeb runs a web UI against GD on ADDRESS.
 func cmdWeb(fs *flag.FlagSet, args []string) error {
@@ -1156,6 +1292,9 @@ type syncContext struct {
 	stmtDisposeSub *sql.Stmt
 	stmtDisposeAll *sql.Stmt
+
+	// exclude specifies filesystem paths that should be seen as missing.
+	exclude *regexp.Regexp
+
 	// linked tracks which image hashes we've checked so far in the run.
 	linked map[string]struct{}
 }
@@ -1250,7 +1389,7 @@ func syncIsImage(path string) (bool, error) {
 }
 
 func syncPingImage(path string) (int, int, error) {
-	out, err := exec.Command("magick", "identify", "-limit", "thread", "1",
+	out, err := exec.Command("identify", "-limit", "thread", "1",
 		"-ping", "-format", "%w %h", path+"[0]").Output()
 	if err != nil {
 		return 0, 0, err
@@ -1415,7 +1554,11 @@ func syncPostProcess(c *syncContext, info syncFileInfo) error {
 	case info.err != nil:
 		// * → error
 		if ee, ok := info.err.(*exec.ExitError); ok {
-			syncPrintf(c, "%s: %s", info.fsPath, ee.Stderr)
+			message := string(ee.Stderr)
+			if message == "" {
+				message = ee.String()
+			}
+			syncPrintf(c, "%s: %s", info.fsPath, message)
 		} else {
 			return info.err
 		}
@@ -1560,6 +1703,12 @@ func syncDirectory(c *syncContext, dbParent int64, fsPath string) error {
 		fs = nil
 	}
+
+	if c.exclude != nil {
+		fs = slices.DeleteFunc(fs, func(f syncFile) bool {
+			return c.exclude.MatchString(filepath.Join(fsPath, f.fsName))
+		})
+	}
+
 	// Convert differences to a form more convenient for processing.
 	iDB, iFS, pairs := 0, 0, []syncPair{}
 	for iDB < len(db) && iFS < len(fs) {
@@ -1735,9 +1884,21 @@ const disposeCTE = `WITH RECURSIVE
 	HAVING count = total
 )`
+
+type excludeRE struct{ re *regexp.Regexp }
+
+func (re *excludeRE) String() string { return fmt.Sprintf("%v", re.re) }
+
+func (re *excludeRE) Set(value string) error {
+	var err error
+	re.re, err = regexp.Compile(value)
+	return err
+}
+
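The `excludeRE` type added above is the standard `flag.Value` pattern: providing `String` and `Set` lets a custom type be registered with `flag.Var`, so the `-exclude` pattern is compiled once at parse time and bad patterns are rejected as flag errors. A self-contained sketch of the same pattern (the flag set name and test inputs here are illustrative, not from the source):

```go
package main

import (
	"flag"
	"fmt"
	"regexp"
)

// excludeRE wraps a *regexp.Regexp so it satisfies flag.Value:
// Set compiles the pattern, String reports it back for -h output.
type excludeRE struct{ re *regexp.Regexp }

func (re *excludeRE) String() string { return fmt.Sprintf("%v", re.re) }

func (re *excludeRE) Set(value string) error {
	var err error
	re.re, err = regexp.Compile(value)
	return err
}

func main() {
	var exclude excludeRE
	fs := flag.NewFlagSet("sync", flag.ContinueOnError)
	fs.Var(&exclude, "exclude", "exclude paths matching regular expression")
	if err := fs.Parse([]string{"-exclude", `/excluded[.]`}); err != nil {
		panic(err)
	}
	fmt.Println(exclude.re.MatchString("/root/excluded.png"))
}
```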
 // cmdSync ensures the given (sub)roots are accurately reflected
 // in the database.
 func cmdSync(fs *flag.FlagSet, args []string) error {
+	var exclude excludeRE
+	fs.Var(&exclude, "exclude", "exclude paths matching regular expression")
 	fullpaths := fs.Bool("fullpaths", false, "don't basename arguments")
 	if err := fs.Parse(args); err != nil {
 		return err
@@ -1775,7 +1936,7 @@ func cmdSync(fs *flag.FlagSet, args []string) error {
 	}
 
 	c := syncContext{ctx: ctx, tx: tx, pb: newProgressBar(-1),
-		linked: make(map[string]struct{})}
+		exclude: exclude.re, linked: make(map[string]struct{})}
 	defer c.pb.Stop()
 
 	if c.stmtOrphan, err = c.tx.Prepare(disposeCTE + `
@@ -1871,6 +2032,88 @@ func cmdRemove(fs *flag.FlagSet, args []string) error {
 	return tx.Commit()
 }
+
+// --- Forgetting --------------------------------------------------------------
+
+// cmdForget is for purging orphaned images from the database.
+func cmdForget(fs *flag.FlagSet, args []string) error {
+	if err := fs.Parse(args); err != nil {
+		return err
+	}
+	if fs.NArg() < 2 {
+		return errWrongUsage
+	}
+	if err := openDB(fs.Arg(0)); err != nil {
+		return err
+	}
+
+	tx, err := db.Begin()
+	if err != nil {
+		return err
+	}
+	defer tx.Rollback()
+
+	// Creating a temporary database seems justifiable in this case.
+	_, err = tx.Exec(
+		`CREATE TEMPORARY TABLE forgotten (sha1 TEXT PRIMARY KEY)`)
+	if err != nil {
+		return err
+	}
+	stmt, err := tx.Prepare(`INSERT INTO forgotten (sha1) VALUES (?)`)
+	if err != nil {
+		return err
+	}
+	defer stmt.Close()
+	for _, sha1 := range fs.Args()[1:] {
+		if _, err := stmt.Exec(sha1); err != nil {
+			return err
+		}
+	}
+
+	rows, err := tx.Query(`DELETE FROM forgotten
+		WHERE sha1 IN (SELECT sha1 FROM node)
+		OR sha1 NOT IN (SELECT sha1 FROM image)
+		RETURNING sha1`)
+	if err != nil {
+		return err
+	}
+	defer rows.Close()
+	for rows.Next() {
+		var sha1 string
+		if err := rows.Scan(&sha1); err != nil {
+			return err
+		}
+		log.Printf("not an orphan or not known at all: %s", sha1)
+	}
+	if _, err = tx.Exec(`
+		DELETE FROM tag_assignment WHERE sha1 IN (SELECT sha1 FROM forgotten);
+		DELETE FROM orphan WHERE sha1 IN (SELECT sha1 FROM forgotten);
+		DELETE FROM image WHERE sha1 IN (SELECT sha1 FROM forgotten);
+	`); err != nil {
+		return err
+	}
+
+	rows, err = tx.Query(`SELECT sha1 FROM forgotten`)
+	if err != nil {
+		return err
+	}
+	defer rows.Close()
+	for rows.Next() {
+		var sha1 string
+		if err := rows.Scan(&sha1); err != nil {
+			return err
+		}
+		if err := os.Remove(imagePath(sha1)); err != nil &&
			!os.IsNotExist(err) {
+			log.Printf("%s", err)
+		}
+		if err := os.Remove(thumbPath(sha1)); err != nil &&
			!os.IsNotExist(err) {
+			log.Printf("%s", err)
+		}
+	}
+	return tx.Commit()
+}
+
 // --- Tagging -----------------------------------------------------------------
 
 // cmdTag mass imports tags from data passed on stdin as a TSV
@@ -1993,36 +2236,54 @@ func collectFileListing(root string) (paths []string, err error) {
 	return
 }
 
-func checkFiles(root, suffix string, hashes []string) (bool, []string, error) {
+func checkFiles(gc bool,
+	root, suffix string, hashes []string) (bool, []string, error) {
 	db := hashesToFileListing(root, suffix, hashes)
 	fs, err := collectFileListing(root)
 	if err != nil {
 		return false, nil, err
 	}
 
-	iDB, iFS, ok, intersection := 0, 0, true, []string{}
+	// There are two legitimate cases of FS-only database files:
+	//  1. There is no code to unlink images at all
+	//     (although sync should create orphan records for everything).
+	//  2. thumbnail: failures may result in an unreferenced garbage image.
+	ok := true
+	onlyDB := func(path string) {
+		ok = false
+		fmt.Printf("only in DB: %s\n", path)
+	}
+	onlyFS := func(path string) {
+		if !gc {
+			ok = false
+			fmt.Printf("only in FS: %s\n", path)
+		} else if err := os.Remove(path); err != nil {
+			ok = false
+			fmt.Printf("only in FS (removing failed): %s: %s\n", path, err)
+		} else {
+			fmt.Printf("only in FS (removing): %s\n", path)
+		}
+	}
+
+	iDB, iFS, intersection := 0, 0, []string{}
 	for iDB < len(db) && iFS < len(fs) {
 		if db[iDB] == fs[iFS] {
 			intersection = append(intersection, db[iDB])
 			iDB++
 			iFS++
 		} else if db[iDB] < fs[iFS] {
-			ok = false
-			fmt.Printf("only in DB: %s\n", db[iDB])
+			onlyDB(db[iDB])
 			iDB++
 		} else {
-			ok = false
-			fmt.Printf("only in FS: %s\n", fs[iFS])
+			onlyFS(fs[iFS])
 			iFS++
 		}
 	}
 	for _, path := range db[iDB:] {
-		ok = false
-		fmt.Printf("only in DB: %s\n", path)
+		onlyDB(path)
 	}
 	for _, path := range fs[iFS:] {
-		ok = false
-		fmt.Printf("only in FS: %s\n", path)
+		onlyFS(path)
 	}
 	return ok, intersection, nil
 }
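`checkFiles` compares the database and filesystem listings with a classic merge join: both slices are sorted, so one two-pointer pass finds elements unique to either side plus the intersection. The same walk, extracted into a runnable sketch (`diffSorted` is a hypothetical name, not in the source):

```go
package main

import "fmt"

// diffSorted walks two sorted string slices in lockstep, reporting
// elements unique to either side and collecting the intersection,
// mirroring the loop in checkFiles.
func diffSorted(db, fs []string) (onlyDB, onlyFS, both []string) {
	iDB, iFS := 0, 0
	for iDB < len(db) && iFS < len(fs) {
		switch {
		case db[iDB] == fs[iFS]:
			both = append(both, db[iDB])
			iDB++
			iFS++
		case db[iDB] < fs[iFS]:
			onlyDB = append(onlyDB, db[iDB])
			iDB++
		default:
			onlyFS = append(onlyFS, fs[iFS])
			iFS++
		}
	}
	// Whatever remains on either side has no counterpart.
	onlyDB = append(onlyDB, db[iDB:]...)
	onlyFS = append(onlyFS, fs[iFS:]...)
	return
}

func main() {
	d, f, b := diffSorted([]string{"a", "b", "d"}, []string{"b", "c", "d"})
	fmt.Println(d, f, b)
}
```

This runs in O(len(db)+len(fs)) and needs no auxiliary set, which is why both listings are collected in sorted order first.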
@@ -2070,6 +2331,7 @@ func checkHashes(paths []string) (bool, error) {
 // cmdCheck carries out various database consistency checks.
 func cmdCheck(fs *flag.FlagSet, args []string) error {
 	full := fs.Bool("full", false, "verify image hashes")
+	gc := fs.Bool("gc", false, "garbage collect database files")
 	if err := fs.Parse(args); err != nil {
 		return err
 	}
@@ -2106,13 +2368,13 @@ func cmdCheck(fs *flag.FlagSet, args []string) error {
 
 	// This somewhat duplicates {image,thumb}Path().
 	log.Println("checking SQL against filesystem")
-	okImages, intersection, err := checkFiles(
+	okImages, intersection, err := checkFiles(*gc,
 		filepath.Join(galleryDirectory, nameOfImageRoot), "", allSHA1)
 	if err != nil {
 		return err
 	}
 
-	okThumbs, _, err := checkFiles(
+	okThumbs, _, err := checkFiles(*gc,
 		filepath.Join(galleryDirectory, nameOfThumbRoot), ".webp", thumbSHA1)
 	if err != nil {
 		return err
@@ -2121,11 +2383,11 @@ func cmdCheck(fs *flag.FlagSet, args []string) error {
 		ok = false
 	}
 
-	log.Println("checking for dead symlinks")
+	log.Println("checking for dead symlinks (should become orphans on sync)")
 	for _, path := range intersection {
 		if _, err := os.Stat(path); err != nil {
 			ok = false
-			fmt.Printf("%s: %s\n", path, err)
+			fmt.Printf("%s: %s\n", path, err.(*os.PathError).Unwrap())
 		}
 	}
 
@@ -2172,8 +2434,13 @@ func makeThumbnail(load bool, pathImage, pathThumb string) (
 		return 0, 0, err
 	}
+
+	// This is still too much, but it will be effective enough.
+	memoryLimit := strconv.FormatInt(
+		int64(C.sysconf(C._SC_PHYS_PAGES)*C.sysconf(C._SC_PAGE_SIZE))/
+			int64(len(taskSemaphore)), 10)
+
 	// Create a normalized thumbnail. Since we don't particularly need
-	// any complex processing, such as surrounding of metadata,
+	// any complex processing, such as surrounding metadata,
 	// simply push it through ImageMagick.
 	//
 	// - http://www.ericbrasseur.org/gamma.html
@@ -2185,8 +2452,17 @@ func makeThumbnail(load bool, pathImage, pathThumb string) (
 	//
 	// TODO: See if we can optimize resulting WebP animations.
 	// (Do -layers optimize* apply to this format at all?)
-	cmd := exec.Command("magick", "-limit", "thread", "1", pathImage,
-		"-coalesce", "-colorspace", "RGB", "-auto-orient", "-strip",
+	cmd := exec.Command("convert", "-limit", "thread", "1",
+
+		// Do not invite the OOM killer, a particularly unpleasant guest.
+		"-limit", "memory", memoryLimit,
+
+		// ImageMagick creates files in /tmp, but that tends to be a tmpfs,
+		// which is backed by memory. The path could also be moved elsewhere:
+		// -define registry:temporary-path=/var/tmp
+		"-limit", "map", "0", "-limit", "disk", "0",
+
+		pathImage, "-coalesce", "-colorspace", "RGB", "-auto-orient", "-strip",
 		"-resize", "256x128>", "-colorspace", "sRGB",
 		"-format", "%w %h", "+write", pathThumb, "-delete", "1--1", "info:")
 
@@ -2237,7 +2513,10 @@ func cmdThumbnail(fs *flag.FlagSet, args []string) error {
 		w, h, err := makeThumbnail(*load, pathImage, pathThumb)
 		if err != nil {
 			if ee, ok := err.(*exec.ExitError); ok {
-				return string(ee.Stderr), nil
+				if message = string(ee.Stderr); message != "" {
+					return message, nil
+				}
+				return ee.String(), nil
 			}
 			return "", err
 		}
@@ -2390,14 +2669,29 @@ func cmdDhash(fs *flag.FlagSet, args []string) error {
 		}
 	}
 
-	stmt, err := db.Prepare(`UPDATE image SET dhash = ? WHERE sha1 = ?`)
+	// Commits are very IO-expensive in both WAL and non-WAL SQLite,
+	// so write this in one go. For a middle ground, we could batch the updates.
+	tx, err := db.Begin()
+	if err != nil {
+		return err
+	}
+	defer tx.Rollback()
+
+	// Mild hack: upgrade the transaction to a write one straight away,
+	// in order to rule out deadlocks (preventable failure).
+	if _, err := tx.Exec(`END TRANSACTION;
+		BEGIN IMMEDIATE TRANSACTION`); err != nil {
+		return err
+	}
+
+	stmt, err := tx.Prepare(`UPDATE image SET dhash = ? WHERE sha1 = ?`)
 	if err != nil {
 		return err
 	}
 	defer stmt.Close()
 
 	var mu sync.Mutex
-	return parallelize(hexSHA1, func(sha1 string) (message string, err error) {
+	err = parallelize(hexSHA1, func(sha1 string) (message string, err error) {
 		hash, err := makeDhash(sha1)
 		if errors.Is(err, errIsAnimation) {
 			// Ignoring this common condition.
@@ -2411,6 +2705,10 @@ func cmdDhash(fs *flag.FlagSet, args []string) error {
 		_, err = stmt.Exec(int64(hash), sha1)
 		return "", err
 	})
+	if err != nil {
+		return err
+	}
+	return tx.Commit()
 }
 
 // --- Main --------------------------------------------------------------------
@@ -2427,6 +2725,7 @@ var commands = map[string]struct {
 	"tag":       {cmdTag, "GD SPACE [DESCRIPTION]", "Import tags."},
 	"sync":      {cmdSync, "GD ROOT...", "Synchronise with the filesystem."},
 	"remove":    {cmdRemove, "GD PATH...", "Remove database subtrees."},
+	"forget":    {cmdForget, "GD SHA1...", "Dispose of orphans."},
 	"check":     {cmdCheck, "GD", "Run consistency checks."},
 	"thumbnail": {cmdThumbnail, "GD [SHA1...]", "Generate thumbnails."},
 	"dhash":     {cmdDhash, "GD [SHA1...]", "Compute perceptual hashes."},
@@ -2452,6 +2751,8 @@ func usage() {
 }
 
 func main() {
+	threads := flag.Int("threads", -1, "level of parallelization")
+
 	// This implements the -h switch for us by default.
 	// The rest of the handling here closely follows what flag does internally.
 	flag.Usage = usage
@@ -2477,12 +2778,20 @@ func main() {
 		fs.PrintDefaults()
 	}
 
-	taskSemaphore = newSemaphore(runtime.NumCPU())
+	if *threads > 0 {
+		taskSemaphore = newSemaphore(*threads)
+	} else {
+		taskSemaphore = newSemaphore(runtime.NumCPU())
+	}
+
 	err := cmd.handler(fs, flag.Args()[1:])
 
 	// Note that the database object has a closing finalizer,
 	// we just additionally print any errors coming from there.
 	if db != nil {
+		if _, err := db.Exec(`PRAGMA optimize`); err != nil {
+			log.Println(err)
+		}
 		if err := db.Close(); err != nil {
 			log.Println(err)
 		}
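`main` now sizes `taskSemaphore` from `-threads`, defaulting to `runtime.NumCPU()`. The `newSemaphore` implementation is not part of this diff; a counting semaphore in Go is commonly a buffered channel of empty structs, which would also explain the `len(taskSemaphore)` use in `makeThumbnail`. A sketch under that assumption:

```go
package main

import (
	"fmt"
	"sync"
)

// semaphore limits concurrency to its capacity: acquire blocks once
// cap(s) tokens are held. Assumed shape, not the source's definition.
type semaphore chan struct{}

func newSemaphore(n int) semaphore { return make(semaphore, n) }

func (s semaphore) acquire() { s <- struct{}{} }
func (s semaphore) release() { <-s }

func main() {
	sem := newSemaphore(2) // at most two goroutines inside at once
	var wg sync.WaitGroup
	var mu sync.Mutex
	counter := 0
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem.acquire()
			defer sem.release()
			mu.Lock()
			counter++
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println(counter)
}
```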
@@ -10,7 +10,7 @@ function call(method, params) {
 	callActive++
 	return m.request({
 		method: "POST",
-		url: `/api/${method}`,
+		url: `api/${method}`,
 		body: params,
 	}).then(result => {
 		callActive--
@@ -98,7 +98,7 @@ let Thumbnail = {
 		if (!e.thumbW || !e.thumbH)
 			return m('.thumbnail.missing', {...vnode.attrs, info: null})
 		return m('img.thumbnail', {...vnode.attrs, info: null,
-			src: `/thumb/${e.sha1}`, width: e.thumbW, height: e.thumbH,
+			src: `thumb/${e.sha1}`, width: e.thumbW, height: e.thumbH,
 			loading})
 	},
 }
@@ -472,13 +472,15 @@ let ViewBar = {
 			m('ul', ViewModel.paths.map(path =>
 				m('li', m(ViewBarPath, {path})))),
 			m('h2', "Tags"),
-			Object.entries(ViewModel.tags).map(([space, tags]) => [
-				m("h3", m(m.route.Link, {href: `/tags/${space}`}, space)),
-				m("ul.tags", Object.entries(tags)
-					.sort(([t1, w1], [t2, w2]) => (w2 - w1))
-					.map(([tag, score]) =>
-						m(ScoredTag, {space, tagname: tag, score}))),
-			]),
+			Object.entries(ViewModel.tags).map(([space, tags]) =>
+				m('details[open]', [
+					m('summary', m("h3",
+						m(m.route.Link, {href: `/tags/${space}`}, space))),
+					m("ul.tags", Object.entries(tags)
+						.sort(([t1, w1], [t2, w2]) => (w2 - w1))
+						.map(([tag, score]) =>
+							m(ScoredTag, {space, tagname: tag, score}))),
+				])),
 		])
 	},
 }
@@ -492,7 +494,7 @@ let View = {
 	view(vnode) {
 		const view = m('.view', [
 			ViewModel.sha1 !== undefined
-				? m('img', {src: `/image/${ViewModel.sha1}`,
+				? m('img', {src: `image/${ViewModel.sha1}`,
 					width: ViewModel.width, height: ViewModel.height})
 				: "No image.",
 		])
@@ -609,13 +611,14 @@ let SearchRelated = {
 	view(vnode) {
 		return Object.entries(SearchModel.related)
 			.sort((a, b) => a[0].localeCompare(b[0]))
-			.map(([space, tags]) => [
-				m('h2', space),
+			.map(([space, tags]) => m('details[open]', [
+				m('summary', m('h2',
+					m(m.route.Link, {href: `/tags/${space}`}, space))),
 				m('ul.tags', tags
 					.sort((a, b) => (b.score - a.score))
 					.map(({tag, score}) =>
 						m(ScoredTag, {space, tagname: tag, score}))),
-			])
+			]))
 	},
 }
@@ -646,7 +649,11 @@ let Search = {
 			m(Header),
 			m('.body', {}, [
 				m('.sidebar', [
-					m('p', SearchModel.query),
+					m('input', {
+						value: SearchModel.query,
+						onchange: event => m.route.set(
+							`/search/:key`, {key: event.target.value}),
+					}),
					m(SearchRelated),
 				]),
 				m(SearchView),
@@ -24,11 +24,15 @@ a { color: inherit; }
 .header .activity { padding: .25rem .5rem; align-self: center; color: #fff; }
 .header .activity.error { color: #f00; }
 
+summary h2, summary h3 { display: inline-block; }
+
 .sidebar { padding: .25rem .5rem; background: var(--shade-color);
 	border-right: 1px solid #ccc; overflow: auto;
 	min-width: 10rem; max-width: 20rem; flex-shrink: 0; }
+.sidebar input { width: 100%; box-sizing: border-box; margin: .5rem 0;
+	font-size: inherit; }
 .sidebar h2 { margin: 0.5em 0 0.25em 0; padding: 0; font-size: 1.2rem; }
-.sidebar ul { margin: .5rem 0; padding: 0; }
+.sidebar ul { margin: 0; padding: 0; }
 
 .sidebar .path { margin: .5rem -.5rem; }
 .sidebar .path li { margin: 0; padding: 0; }
@@ -79,7 +83,7 @@ img.thumbnail, .thumbnail.missing { box-shadow: 0 0 3px rgba(0, 0, 0, 0.75);
 .viewbar { padding: .25rem .5rem; background: #eee;
 	border-left: 1px solid #ccc; min-width: 20rem; overflow: auto; }
 .viewbar h2 { margin: 0.5em 0 0.25em 0; padding: 0; font-size: 1.2rem; }
-.viewbar h3 { margin: 0.25em 0; padding: 0; font-size: 1.1rem; }
+.viewbar h3 { margin: 0.5em 0 0.25em 0; padding: 0; font-size: 1.1rem; }
 .viewbar ul { margin: 0; padding: 0 0 0 1.25em; list-style-type: "- "; }
 .viewbar ul.tags { padding: 0; list-style-type: none; }
 .viewbar li { margin: 0; padding: 0; }
test.sh
@@ -16,6 +16,9 @@ sha1duplicate=$sha1
 cp $input/Test/dhash.png \
 	$input/Test/multiple-paths.png
 
+gen -seed 15 -size 256x256 plasma:fractal \
+	$input/Test/excluded.png
+
 gen -seed 20 -size 160x128 plasma:fractal \
 	-bordercolor transparent -border 64 \
 	$input/Test/transparent-wide.png
@@ -36,7 +39,7 @@ gen $input/Test/animation-small.gif \
 	$input/Test/video.mp4
 
 ./gallery init $target
-./gallery sync $target $input "$@"
+./gallery sync -exclude '/excluded[.]' $target $input "$@"
 ./gallery thumbnail $target
 ./gallery dhash $target
 ./gallery tag $target test "Test space" <<-END
@@ -47,7 +50,7 @@ END
 
 # TODO: Test all the various possible sync transitions.
 mv $input/Test $input/Plasma
-./gallery sync $target $input
+./gallery sync -exclude '/excluded[.]' $target $input
 
 ./gallery web $target :8080 &
 web=$!
Block a user