Feat: Adapt to D1 database

MarSeventh
2025-08-22 18:20:10 +08:00
parent 6329091173
commit 81740fa2c2
35 changed files with 478 additions and 2309 deletions

View File

@@ -1,191 +0,0 @@
# CloudFlare ImgBed KV to D1 Database Migration Guide
This guide helps you migrate your existing KV storage data to a D1 database.
## Before You Migrate
### 1. Configure the D1 Database
First, create a D1 database in the Cloudflare console:
```bash
# Create the D1 database
wrangler d1 create imgbed-database
# Run the database initialization script
wrangler d1 execute imgbed-database --file=./database/init.sql
```
### 2. Update wrangler.toml
Add the D1 database binding to your `wrangler.toml` file:
```toml
[[d1_databases]]
binding = "DB"
database_name = "imgbed-database"
database_id = "your-database-id"
```
### 3. Back Up Existing Data
Before migrating, it is strongly recommended to back up your existing data:
1. Open System Settings → Backup & Restore in the admin panel
2. Click "Back Up Data" to download a complete backup file
3. Store the backup file in a safe place
## Migration Steps
### 1. Check the Migration Environment
Call the migration tool to check the environment:
```
GET /api/manage/migrate?action=check
```
Make sure `canMigrate` is `true` in the response.
### 2. View the Migration Status
View the current data statistics:
```
GET /api/manage/migrate?action=status
```
### 3. Run the Migration
Start the data migration:
```
GET /api/manage/migrate?action=migrate
```
The migration process covers the following (a scripted sketch of these calls follows the list):
- File metadata migration
- System settings migration
- Index operation migration
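The three calls can also be scripted. Below is a minimal sketch (the `BASE` URL is a placeholder, and it assumes the caller already handles admin authentication for these endpoints):
```js
// Sketch: drive the migration endpoints shown above from a script or console.
const BASE = 'https://your-domain.com';

async function callMigrate(action) {
  const res = await fetch(`${BASE}/api/manage/migrate?action=${action}`);
  if (!res.ok) throw new Error(`${action} failed: HTTP ${res.status}`);
  return res.json();
}

(async () => {
  const check = await callMigrate('check');             // environment check
  if (!check.canMigrate) throw new Error(check.message);

  const result = await callMigrate('migrate');          // run the migration
  console.log('files:', result.files, 'settings:', result.settings);

  console.log('status:', await callMigrate('status'));  // compare KV vs D1 counts
})();
```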
## Post-Migration Verification
### 1. Check Data Integrity
- Open the admin panel and confirm the file list displays correctly
- Test the file upload feature
- Check that system settings are unchanged
### 2. Performance Testing
- Test file access speed
- Verify that management features work correctly
### 3. Feature Verification
- File upload/delete
- Backup and restore
- User authentication
- API token management
## Database Schema
After migration, the D1 database contains the following tables (a small write sketch against the `files` table follows the descriptions):
### files table
Stores file metadata with the following columns:
- `id`: file ID (primary key)
- `value`: file value (used for chunked files)
- `metadata`: file metadata in JSON format
- Additional indexed columns: `file_name`, `file_type`, `timestamp`
### settings table
Stores system configuration:
- `key`: configuration key (primary key)
- `value`: configuration value
- `category`: configuration category
### index_operations table
Stores index operation records:
- `id`: operation ID (primary key)
- `type`: operation type
- `timestamp`: timestamp
- `data`: operation data
- `processed`: whether the operation has been processed
### index_metadata table
Stores index metadata:
- `key`: metadata key
- `last_updated`: last update time
- `total_count`: total file count
- `last_operation_id`: last operation ID
### other_data table
Stores miscellaneous data such as blocked IPs:
- `key`: data key
- `value`: data value
- `type`: data type
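To make the mapping concrete, here is a minimal write sketch against the `files` table. The helper name is hypothetical and not part of the project; it assumes a `DB` binding and the metadata fields used elsewhere in this repository (`FileName`, `FileType`, `TimeStamp`):
```js
// Sketch: persist one file record following the schema described above.
export async function saveFileRecord(env, id, metadata, value = '') {
  await env.DB.prepare(
    `INSERT OR REPLACE INTO files (id, value, metadata, file_name, file_type, timestamp)
     VALUES (?, ?, ?, ?, ?, ?)`
  ).bind(
    id,
    value,
    JSON.stringify(metadata),      // full metadata kept as JSON
    metadata.FileName ?? null,     // duplicated into indexed columns for querying
    metadata.FileType ?? null,
    metadata.TimeStamp ?? Date.now()
  ).run();
}
```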
## Compatibility Notes
### Backward Compatibility
- The system automatically detects which database type is available (D1 or KV); a sketch of this detection logic appears below
- If D1 is unavailable, the system automatically falls back to KV storage
- All existing API endpoints remain unchanged
### Environment Variables
- `DB`: D1 database binding (new)
- `img_url`: KV storage binding (kept as a fallback)
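Roughly, the detection can be pictured as the sketch below. This is a simplified illustration of the idea, not the project's actual `utils/databaseAdapter.js`:
```js
// Sketch: D1-first selection with automatic KV fallback.
export function pickDatabaseBinding(env) {
  if (env.DB && typeof env.DB.prepare === 'function') {
    return { type: 'D1', binding: env.DB };        // preferred backend
  }
  if (env.img_url && typeof env.img_url.get === 'function') {
    return { type: 'KV', binding: env.img_url };   // automatic fallback
  }
  throw new Error('No database binding found: configure DB (D1) or img_url (KV)');
}
```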
## Troubleshooting
### Common Issues
1. **Migration fails**
   - Check that the D1 database is configured correctly
   - Confirm that the database initialization script has been executed
   - Check the error messages in the migration log
2. **Incomplete data**
   - Check the error list in the migration result
   - Re-run the migration (incremental migration is supported)
   - Restore data from the backup file
3. **Performance issues**
   - D1 queries are slightly slower than KV; this is expected
   - Make sure the appropriate indexes are in place
   - Consider optimizing your queries
### Rollback Plan
If problems appear after migration, you can:
1. Temporarily remove the D1 binding; the system will automatically fall back to KV
2. Restore the pre-migration state from the backup file
3. Reconfigure and re-test the D1 database
## Performance Tuning Suggestions
### 1. Index Optimization
The D1 database ships with the necessary indexes, including:
- File timestamp index
- Directory index
- File type index
### 2. Query Optimization
- Paginate when querying large data sets (see the sketch after this list)
- Avoid full table scans
- Use WHERE conditions appropriately
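For example, keyset pagination over the `files` table might look like the following sketch (column names follow the schema above; the page size is arbitrary):
```js
// Sketch: page through files newest-first using the timestamp index,
// passing the last seen timestamp back in as the cursor.
async function listFilesPage(env, cursorTimestamp = null, pageSize = 100) {
  const stmt = cursorTimestamp === null
    ? env.DB.prepare(
        'SELECT id, metadata, timestamp FROM files ORDER BY timestamp DESC LIMIT ?'
      ).bind(pageSize)
    : env.DB.prepare(
        'SELECT id, metadata, timestamp FROM files WHERE timestamp < ? ORDER BY timestamp DESC LIMIT ?'
      ).bind(cursorTimestamp, pageSize);

  const { results } = await stmt.all();
  const nextCursor = results.length ? results[results.length - 1].timestamp : null;
  return { results, nextCursor };
}
```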
### 3. Monitoring
- Check database performance regularly
- Monitor query execution times
- Watch the error logs
## Support
If you run into problems during the migration:
1. Check the error messages in the browser console
2. Check the Cloudflare Workers logs
3. Confirm that all configuration is correct
4. Refer to the troubleshooting section of this guide
Once the migration is complete, your image host runs on the more capable D1 database, with better query performance and data management.

README.md
View File

@@ -1,8 +1,8 @@
<div align="center">
<a href="https://github.com/MarSeventh/CloudFlare-ImgBed"><img width="80%" alt="logo" src="static/readme/banner.png"/></a>
<p><em>🗂️ Open-source file hosting solution supporting Docker and serverless deployment, with multiple storage channels such as Telegram Bot, Cloudflare R2, and S3.</em> Modified version that replaces KV with D1 storage</p>
<p><em>🗂️ Open-source file hosting solution supporting Docker and serverless deployment, with multiple storage channels such as Telegram Bot, Cloudflare R2, and S3.</em></p>
<p>
<a href="https://github.com/ccxyChuzhong/CloudFlare-ImgBed-D1/blob/main/README.md">简体中文</a> | <a href="https://github.com/ccxyChuzhong/CloudFlare-ImgBed-D1/blob/main/README_en.md">English</a> | <a href="https://github.com/MarSeventh/CloudFlare-ImgBed">KV版本原版</a> | <a href="https://github.com/ccxyChuzhong/CloudFlare-ImgBed-D1">D1版本</a> | <a href="https://cfbed.sanyue.de">官方网站</a>
<a href="https://github.com/MarSeventh/CloudFlare-ImgBed/blob/main/README.md">简体中文</a> | <a href="https://github.com/MarSeventh/CloudFlare-ImgBed/blob/main/README_en.md">English</a> | <a href="https://cfbed.sanyue.de">官方网站</a>
</p>
<div>
<a href="https://github.com/MarSeventh/CloudFlare-ImgBed/blob/main/LICENSE">
@@ -78,137 +78,6 @@
</details>
# Must Read! Must Read! Must Read!
If you are using KV storage and want to switch to D1 storage, it is recommended to create a new image host and migrate your data with the system's backup and restore feature.
<details>
<summary>Detailed steps for switching from KV to D1 storage</summary>
- First, confirm that your D1 database has been created. The database name must be `imgbed-database`. Execute all of the SQL statements below section by section (a quick verification sketch follows the script):
```sql
-- CloudFlare ImgBed D1 Database Initialization Script
-- This script initializes the D1 database
-- Drop existing tables (only if re-initialization is needed)
-- Note: use with caution in a production environment
-- DROP TABLE IF EXISTS files;
-- DROP TABLE IF EXISTS settings;
-- DROP TABLE IF EXISTS index_operations;
-- DROP TABLE IF EXISTS index_metadata;
-- DROP TABLE IF EXISTS other_data;
-- Create the main database schema
-- (this mirrors the content of schema.sql)
-- 1. Files table - stores file metadata
CREATE TABLE IF NOT EXISTS files (
id TEXT PRIMARY KEY,
value TEXT,
metadata TEXT NOT NULL,
file_name TEXT,
file_type TEXT,
file_size TEXT,
upload_ip TEXT,
upload_address TEXT,
list_type TEXT,
timestamp INTEGER,
label TEXT,
directory TEXT,
channel TEXT,
channel_name TEXT,
tg_file_id TEXT,
tg_chat_id TEXT,
tg_bot_token TEXT,
is_chunked BOOLEAN DEFAULT FALSE,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- 2. System configuration table
CREATE TABLE IF NOT EXISTS settings (
key TEXT PRIMARY KEY,
value TEXT NOT NULL,
category TEXT,
description TEXT,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- 3. Index operations table
CREATE TABLE IF NOT EXISTS index_operations (
id TEXT PRIMARY KEY,
type TEXT NOT NULL,
timestamp INTEGER NOT NULL,
data TEXT NOT NULL,
processed BOOLEAN DEFAULT FALSE,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- 4. Index metadata table
CREATE TABLE IF NOT EXISTS index_metadata (
key TEXT PRIMARY KEY,
last_updated INTEGER,
total_count INTEGER DEFAULT 0,
last_operation_id TEXT,
chunk_count INTEGER DEFAULT 0,
chunk_size INTEGER DEFAULT 0,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- 5. Other data table
CREATE TABLE IF NOT EXISTS other_data (
key TEXT PRIMARY KEY,
value TEXT NOT NULL,
type TEXT,
description TEXT,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- Initialization complete
```
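As a quick sanity check after running the script above, a throwaway Pages Function along these lines (a sketch only; it assumes the `DB` binding configured in the next section) lists the tables that were created:
```js
// Sketch: list the tables created by the init script via the DB binding.
export async function onRequest({ env }) {
  const { results } = await env.DB.prepare(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
  ).all();
  return new Response(JSON.stringify(results.map(r => r.name)), {
    headers: { 'Content-Type': 'application/json' }
  });
}
```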
### Configure the Pages Binding in the Cloudflare Dashboard
#### Step A: Log in to the Cloudflare Dashboard
1. Visit https://dash.cloudflare.com
2. Log in to your account
#### Step B: Open the Pages Project
1. Click **"Pages"** in the left menu
2. Find and click your image hosting project
#### Step C: Configure Functions Bindings
1. On the project page, click the **"Settings"** tab
2. Click **"Functions"** in the left menu
3. Scroll down to the **"D1 database bindings"** section
#### Step D: Add the D1 Binding
1. Click the **"Add binding"** button
2. Fill in the following information:
   - **Variable name**: `DB` (must be uppercase DB)
   - **D1 database**: select the `imgbed-database` you created from the dropdown
3. Click the **"Save"** button
#### Step E: Redeploy Pages
After configuring the binding, the project must be redeployed.
#### Step F: Verify the Configuration
After the deployment completes, visit the following URL to verify the configuration:
```
https://your-domain.com/api/manage/migrate?action=check
```
Check the detailed configuration status:
```
https://your-domain.com/api/manage/migrate?action=status
```
</details>
# 1. Introduction

View File

@@ -1,8 +1,9 @@
<div align="center">
<a href="https://github.com/MarSeventh/CloudFlare-ImgBed"><img width="80%" alt="logo" src="static/readme/banner.png"/></a>
<p><em>🗂Open-source file hosting solution, supporting Docker and serverless deployment, supporting multiple storage channels such as Telegram Bot, Cloudflare R2, S3, etc.</em> Modified version that replaces KV with D1 storage</p>
<p><em>🗂Open-source file hosting solution, supporting Docker and serverless deployment, supporting multiple storage channels such as Telegram Bot, Cloudflare R2, S3, etc.</em></p>
<p>
<a href="https://github.com/ccxyChuzhong/CloudFlare-ImgBed-D1/blob/main/README.md">简体中文</a> | <a href="https://github.com/ccxyChuzhong/CloudFlare-ImgBed-D1/blob/main/README_en.md">English</a> | <a href="https://github.com/MarSeventh/CloudFlare-ImgBed">KV Version (Original)</a> | <a href="https://github.com/ccxyChuzhong/CloudFlare-ImgBed-D1">D1 Version</a> | <a href="https://cfbed.sanyue.de/en">Official Website</a>
<a href="https://github.com/MarSeventh/CloudFlare-ImgBed/blob/main/README.md">简体中文</a> | <a href="https://github.com/MarSeventh/CloudFlare-ImgBed/blob/main/README_en.md">English</a> | <a
href="https://cfbed.sanyue.de/en">Official Website</a>
</p>
<div>
<a href="https://github.com/MarSeventh/CloudFlare-ImgBed/blob/main/LICENSE">
@@ -67,145 +68,6 @@
</details>
# Important! Important! Important!
If you are using KV storage and want to migrate to D1 storage, it is recommended to create a new image hosting deployment. Use the system's backup and restore functions for the data migration.
<details>
<summary>Detailed KV to D1 Storage Migration Guide</summary>
- First, confirm that your D1 database has been created. The database name must be `imgbed-database`. Execute all of the SQL statements below section by section:
```sql
-- CloudFlare ImgBed D1 Database Initialization Script
-- This script is used to initialize the D1 database
-- Drop existing tables (if re-initialization is needed)
-- Note: Use with caution in production environment
-- DROP TABLE IF EXISTS files;
-- DROP TABLE IF EXISTS settings;
-- DROP TABLE IF EXISTS index_operations;
-- DROP TABLE IF EXISTS index_metadata;
-- DROP TABLE IF EXISTS other_data;
-- Execute main database schema creation
-- This will include the content of schema.sql
-- 1. Files table - stores file metadata
CREATE TABLE IF NOT EXISTS files (
id TEXT PRIMARY KEY,
value TEXT,
metadata TEXT NOT NULL,
file_name TEXT,
file_type TEXT,
file_size TEXT,
upload_ip TEXT,
upload_address TEXT,
list_type TEXT,
timestamp INTEGER,
label TEXT,
directory TEXT,
channel TEXT,
channel_name TEXT,
tg_file_id TEXT,
tg_chat_id TEXT,
tg_bot_token TEXT,
is_chunked BOOLEAN DEFAULT FALSE,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- 2. System configuration table
CREATE TABLE IF NOT EXISTS settings (
key TEXT PRIMARY KEY,
value TEXT NOT NULL,
category TEXT,
description TEXT,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- 3. Index operations table
CREATE TABLE IF NOT EXISTS index_operations (
id TEXT PRIMARY KEY,
type TEXT NOT NULL,
timestamp INTEGER NOT NULL,
data TEXT NOT NULL,
processed BOOLEAN DEFAULT FALSE,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- 4. Index metadata table
CREATE TABLE IF NOT EXISTS index_metadata (
key TEXT PRIMARY KEY,
last_updated INTEGER,
total_count INTEGER DEFAULT 0,
last_operation_id TEXT,
chunk_count INTEGER DEFAULT 0,
chunk_size INTEGER DEFAULT 0,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- 5. Other data table
CREATE TABLE IF NOT EXISTS other_data (
key TEXT PRIMARY KEY,
value TEXT NOT NULL,
type TEXT,
description TEXT,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- Insert initial index metadata
INSERT OR REPLACE INTO index_metadata (key, last_updated, total_count, last_operation_id)
VALUES ('main_index', 0, 0, NULL);
-- Initialization complete
-- Database is ready, data migration can begin
```
### Configure Pages Bindings in Cloudflare Dashboard
#### Step A: Login to Cloudflare Dashboard
1. Visit https://dash.cloudflare.com
2. Login to your account
#### Step B: Enter Pages Project
1. Click **"Pages"** in the left menu
2. Find and click your image hosting project
#### Step C: Configure Functions Bindings
1. Click the **"Settings"** tab on the project page
2. Click **"Functions"** in the left menu
3. Scroll down to find the **"D1 database bindings"** section
#### Step D: Add D1 Binding
1. Click the **"Add binding"** button
2. Fill in the following information:
- **Variable name**: `DB` (must be uppercase DB)
- **D1 database**: Select your created `imgbed-database` from the dropdown
3. Click the **"Save"** button
#### Step E: Redeploy Pages
After configuring bindings, you need to redeploy:
#### Step F: Verify Configuration
After deployment is complete, visit the following URL to verify configuration:
```
https://your-domain.com/api/manage/migrate?action=check
```
View detailed configuration status:
```
https://your-domain.com/api/manage/migrate?action=status
```
</details>
# 1. Introduction

View File

@@ -1,37 +0,0 @@
import { errorHandling, telemetryData, checkDatabaseConfig } from './utils/middleware';
// Safe middleware chain with error handling
export async function onRequest(context) {
try {
// Check the database configuration
var dbCheckResult = await checkDatabaseConfig(context);
if (dbCheckResult instanceof Response) {
return dbCheckResult;
}
// Error handling middleware
var errorResult = await errorHandling(context);
if (errorResult instanceof Response) {
return errorResult;
}
// Telemetry middleware
var telemetryResult = await telemetryData(context);
if (telemetryResult instanceof Response) {
return telemetryResult;
}
return await context.next();
} catch (error) {
console.error('Middleware chain error:', error);
return new Response(JSON.stringify({
error: 'Middleware error: ' + error.message,
stack: error.stack
}), {
status: 500,
headers: {
'Content-Type': 'application/json'
}
});
}
}

View File

@@ -1,3 +1,3 @@
import { checkKVConfig } from '../utils/middleware';
import { checkDatabaseConfig } from '../utils/middleware';
export const onRequest = [checkKVConfig];
export const onRequest = [checkDatabaseConfig];

View File

@@ -1,114 +0,0 @@
/**
* Backup feature test tool
*/
import { getDatabase } from '../../utils/databaseAdapter.js';
export async function onRequest(context) {
var env = context.env;
try {
var db = getDatabase(env);
var results = {
databaseType: db.constructor.name || 'Unknown',
settings: {
all: [],
manage: [],
sysConfig: []
},
files: {
count: 0,
sample: []
}
};
// 测试列出所有设置
try {
var allSettings = await db.listSettings({});
results.settings.all = allSettings.keys.map(function(item) {
return {
key: item.name,
hasValue: !!item.value,
valueLength: item.value ? item.value.length : 0
};
});
} catch (error) {
results.settings.allError = error.message;
}
// 测试列出manage@开头的设置
try {
var manageSettings = await db.listSettings({ prefix: 'manage@' });
results.settings.manage = manageSettings.keys.map(function(item) {
return {
key: item.name,
hasValue: !!item.value,
valueLength: item.value ? item.value.length : 0
};
});
} catch (error) {
results.settings.manageError = error.message;
}
// 测试列出sysConfig设置
try {
var sysConfigSettings = await db.listSettings({ prefix: 'manage@sysConfig@' });
results.settings.sysConfig = sysConfigSettings.keys.map(function(item) {
return {
key: item.name,
hasValue: !!item.value,
valueLength: item.value ? item.value.length : 0,
valuePreview: item.value ? item.value.substring(0, 100) + '...' : null
};
});
} catch (error) {
results.settings.sysConfigError = error.message;
}
// 测试列出文件
try {
var filesList = await db.listFiles({ limit: 5 });
results.files.count = filesList.keys.length;
results.files.sample = filesList.keys.map(function(item) {
return {
id: item.name,
hasMetadata: !!item.metadata
};
});
} catch (error) {
results.files.error = error.message;
}
// 测试特定设置的读取
try {
var pageConfig = await db.get('manage@sysConfig@page');
results.specificTests = {
pageConfig: {
exists: !!pageConfig,
length: pageConfig ? pageConfig.length : 0,
preview: pageConfig ? pageConfig.substring(0, 200) + '...' : null
}
};
} catch (error) {
results.specificTests = { error: error.message };
}
return new Response(JSON.stringify(results, null, 2), {
headers: {
'Content-Type': 'application/json'
}
});
} catch (error) {
return new Response(JSON.stringify({
error: error.message,
stack: error.stack
}), {
status: 500,
headers: {
'Content-Type': 'application/json'
}
});
}
}

View File

@@ -1,112 +0,0 @@
/**
* D1 database query test tool
*/
import { getDatabase } from '../../utils/databaseAdapter.js';
export async function onRequest(context) {
var env = context.env;
var url = new URL(context.request.url);
var query = url.searchParams.get('query') || 'files';
try {
var db = getDatabase(env);
var results = {
databaseType: db.constructor.name,
query: query,
results: null,
error: null
};
if (query === 'files') {
// 直接查询files表
try {
var stmt = db.db.prepare('SELECT COUNT(*) as count FROM files');
var countResult = await stmt.first();
results.totalFiles = countResult.count;
// 查询前5条记录
var stmt2 = db.db.prepare('SELECT id, metadata, created_at FROM files ORDER BY created_at DESC LIMIT 5');
var fileResults = await stmt2.all();
// 检查结果格式
console.log('fileResults type:', typeof fileResults);
console.log('fileResults:', fileResults);
if (Array.isArray(fileResults)) {
results.sampleFiles = fileResults.map(function(row) {
return {
id: row.id,
metadata: JSON.parse(row.metadata || '{}'),
created_at: row.created_at
};
});
} else {
results.sampleFiles = [];
results.fileResultsType = typeof fileResults;
results.fileResultsValue = fileResults;
}
} catch (error) {
results.error = 'Direct query failed: ' + error.message;
}
} else if (query === 'list') {
// 测试listFiles方法
try {
var listResult = await db.listFiles({ limit: 5 });
results.listResult = listResult;
} catch (error) {
results.error = 'listFiles failed: ' + error.message;
}
} else if (query === 'listall') {
// 测试通用list方法
try {
var listAllResult = await db.list({ limit: 5 });
results.listAllResult = listAllResult;
} catch (error) {
results.error = 'list failed: ' + error.message;
}
} else if (query === 'prefix') {
// 测试带前缀的查询
var prefix = url.searchParams.get('prefix') || 'cosplay/';
try {
var prefixResult = await db.list({ prefix: prefix, limit: 10 });
results.prefixResult = prefixResult;
results.prefix = prefix;
} catch (error) {
results.error = 'prefix query failed: ' + error.message;
}
} else if (query === 'settings') {
// 查询设置表
try {
var stmt = db.db.prepare('SELECT COUNT(*) as count FROM settings');
var countResult = await stmt.first();
results.totalSettings = countResult.count;
var stmt2 = db.db.prepare('SELECT key, value FROM settings LIMIT 5');
var settingResults = await stmt2.all();
results.sampleSettings = settingResults;
} catch (error) {
results.error = 'Settings query failed: ' + error.message;
}
} else {
results.error = 'Unknown query type. Use: files, list, listall, prefix, settings';
}
return new Response(JSON.stringify(results, null, 2), {
headers: {
'Content-Type': 'application/json'
}
});
} catch (error) {
return new Response(JSON.stringify({
error: error.message,
stack: error.stack
}), {
status: 500,
headers: {
'Content-Type': 'application/json'
}
});
}
}

View File

@@ -1,89 +0,0 @@
/**
* Simple D1 test
*/
export async function onRequest(context) {
var env = context.env;
try {
var results = {
hasDB: !!env.DB,
dbType: env.DB ? typeof env.DB : 'undefined'
};
if (!env.DB) {
return new Response(JSON.stringify(results), {
headers: { 'Content-Type': 'application/json' }
});
}
// 测试简单查询
try {
var stmt = env.DB.prepare('SELECT COUNT(*) as count FROM files');
var countResult = await stmt.first();
results.countQuery = {
success: true,
count: countResult.count,
resultType: typeof countResult
};
} catch (error) {
results.countQuery = {
success: false,
error: error.message
};
}
// 测试all()查询
try {
var stmt2 = env.DB.prepare('SELECT id FROM files LIMIT 3');
var allResult = await stmt2.all();
results.allQuery = {
success: true,
resultType: typeof allResult,
isArray: Array.isArray(allResult),
length: allResult ? allResult.length : 'N/A',
sample: allResult
};
} catch (error) {
results.allQuery = {
success: false,
error: error.message
};
}
// 测试带参数的查询
try {
var stmt3 = env.DB.prepare('SELECT id FROM files WHERE id LIKE ? LIMIT 2');
var paramResult = await stmt3.bind('cosplay/%').all();
results.paramQuery = {
success: true,
resultType: typeof paramResult,
isArray: Array.isArray(paramResult),
length: paramResult ? paramResult.length : 'N/A',
sample: paramResult
};
} catch (error) {
results.paramQuery = {
success: false,
error: error.message
};
}
return new Response(JSON.stringify(results, null, 2), {
headers: {
'Content-Type': 'application/json'
}
});
} catch (error) {
return new Response(JSON.stringify({
error: error.message,
stack: error.stack
}), {
status: 500,
headers: {
'Content-Type': 'application/json'
}
});
}
}

View File

@@ -1,109 +0,0 @@
/**
* Database feature test tool
*/
import { getDatabase } from '../../utils/databaseAdapter.js';
export async function onRequest(context) {
var env = context.env;
try {
var results = {
databaseType: null,
listTest: null,
getTest: null,
putTest: null,
errors: []
};
// 测试数据库连接
try {
var db = getDatabase(env);
results.databaseType = db.constructor.name || 'Unknown';
} catch (error) {
results.errors.push('Database connection failed: ' + error.message);
return new Response(JSON.stringify(results, null, 2), {
status: 500,
headers: { 'Content-Type': 'application/json' }
});
}
// 测试list方法
try {
var listResult = await db.list({ limit: 5 });
results.listTest = {
success: true,
hasKeys: !!(listResult && listResult.keys),
isArray: Array.isArray(listResult.keys),
keyCount: listResult.keys ? listResult.keys.length : 0,
structure: listResult ? Object.keys(listResult) : []
};
} catch (error) {
results.listTest = {
success: false,
error: error.message
};
results.errors.push('List test failed: ' + error.message);
}
// 测试get方法
try {
var getResult = await db.get('test_key_that_does_not_exist');
results.getTest = {
success: true,
result: getResult,
isNull: getResult === null
};
} catch (error) {
results.getTest = {
success: false,
error: error.message
};
results.errors.push('Get test failed: ' + error.message);
}
// 测试put方法使用临时键
try {
var testKey = 'test_' + Date.now();
var testValue = 'test_value_' + Date.now();
await db.put(testKey, testValue);
// 立即读取验证
var retrievedValue = await db.get(testKey);
// 清理测试数据
await db.delete(testKey);
results.putTest = {
success: true,
valueMatch: retrievedValue === testValue,
testKey: testKey,
testValue: testValue,
retrievedValue: retrievedValue
};
} catch (error) {
results.putTest = {
success: false,
error: error.message
};
results.errors.push('Put test failed: ' + error.message);
}
return new Response(JSON.stringify(results, null, 2), {
headers: {
'Content-Type': 'application/json'
}
});
} catch (error) {
return new Response(JSON.stringify({
error: error.message,
stack: error.stack
}), {
status: 500,
headers: {
'Content-Type': 'application/json'
}
});
}
}

View File

@@ -1,121 +0,0 @@
/**
* Environment variable debug tool
* Checks the D1 and KV binding status
*/
import { getDatabase } from '../../utils/databaseAdapter.js';
export async function onRequest(context) {
const { env } = context;
try {
// 检查环境变量
const envInfo = {
hasDB: !!env.DB,
hasImgUrl: !!env.img_url,
dbType: env.DB ? typeof env.DB : 'undefined',
imgUrlType: env.img_url ? typeof env.img_url : 'undefined',
dbPrepare: env.DB && typeof env.DB.prepare === 'function',
imgUrlGet: env.img_url && typeof env.img_url.get === 'function'
};
// 尝试测试 D1 连接
let d1Test = null;
if (env.DB) {
try {
const stmt = env.DB.prepare('SELECT 1 as test');
const result = await stmt.first();
d1Test = { success: true, result: result };
} catch (error) {
d1Test = { success: false, error: error.message };
}
}
// 尝试测试 KV 连接
let kvTest = null;
if (env.img_url) {
try {
const result = await getDatabase(env).list({ limit: 1 });
kvTest = { success: true, hasKeys: result.keys.length > 0 };
} catch (error) {
kvTest = { success: false, error: error.message };
}
}
const debugInfo = {
timestamp: new Date().toISOString(),
environment: envInfo,
d1Test: d1Test,
kvTest: kvTest,
recommendation: getRecommendation(envInfo, d1Test, kvTest)
};
return new Response(JSON.stringify(debugInfo, null, 2), {
headers: {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*'
}
});
} catch (error) {
return new Response(JSON.stringify({
error: 'Debug failed',
message: error.message,
stack: error.stack
}, null, 2), {
status: 500,
headers: {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*'
}
});
}
}
function getRecommendation(envInfo, d1Test, kvTest) {
const recommendations = [];
if (!envInfo.hasDB && !envInfo.hasImgUrl) {
recommendations.push('❌ 没有配置任何数据库绑定');
recommendations.push('🔧 请在 Cloudflare Pages Dashboard 中配置 D1 或 KV 绑定');
}
if (envInfo.hasDB) {
if (!envInfo.dbPrepare) {
recommendations.push('⚠️ D1 绑定存在但 prepare 方法不可用');
recommendations.push('🔧 请检查 D1 数据库是否正确绑定');
} else if (d1Test && !d1Test.success) {
recommendations.push('❌ D1 数据库连接失败: ' + d1Test.error);
recommendations.push('🔧 请检查数据库是否已初始化表结构');
recommendations.push('💡 运行: npx wrangler d1 execute imgbed-database --file=./database/init.sql');
} else if (d1Test && d1Test.success) {
recommendations.push('✅ D1 数据库连接正常');
}
} else {
recommendations.push(' 没有检测到 D1 绑定 (env.DB)');
recommendations.push('🔧 在 Pages Settings → Functions → D1 database bindings 中添加:');
recommendations.push(' Variable name: DB');
recommendations.push(' D1 database: imgbed-database');
}
if (envInfo.hasImgUrl) {
if (!envInfo.imgUrlGet) {
recommendations.push('⚠️ KV 绑定存在但 get 方法不可用');
} else if (kvTest && !kvTest.success) {
recommendations.push('❌ KV 连接失败: ' + kvTest.error);
} else if (kvTest && kvTest.success) {
recommendations.push('✅ KV 存储连接正常');
}
} else {
recommendations.push(' 没有检测到 KV 绑定 (env.img_url)');
}
if (!envInfo.hasDB && !envInfo.hasImgUrl) {
recommendations.push('');
recommendations.push('🚀 快速解决方案:');
recommendations.push('1. 重新部署项目 (配置可能还没生效)');
recommendations.push('2. 等待 2-3 分钟让绑定生效');
recommendations.push('3. 检查 Pages 项目的 Functions 设置');
}
return recommendations;
}

View File

@@ -1,27 +0,0 @@
/**
* Minimal test page
*/
export async function onRequest(context) {
try {
return new Response(JSON.stringify({
success: true,
message: "Minimal test works",
timestamp: new Date().toISOString()
}), {
headers: {
'Content-Type': 'application/json'
}
});
} catch (error) {
return new Response(JSON.stringify({
error: error.message,
stack: error.stack
}), {
status: 500,
headers: {
'Content-Type': 'application/json'
}
});
}
}

View File

@@ -1,163 +0,0 @@
/**
* Restore feature check tool
*/
import { getDatabase } from '../../utils/databaseAdapter.js';
export async function onRequest(context) {
var env = context.env;
var url = new URL(context.request.url);
var action = url.searchParams.get('action') || 'status';
try {
var db = getDatabase(env);
var results = {
action: action,
timestamp: new Date().toISOString()
};
if (action === 'status') {
// 检查当前数据库状态
// 统计文件数量
var fileCount = 0;
var cursor = null;
while (true) {
var response = await db.listFiles({
limit: 1000,
cursor: cursor
});
if (!response || !response.keys || !Array.isArray(response.keys)) {
break;
}
for (var item of response.keys) {
if (!item.name.startsWith('manage@') && !item.name.startsWith('chunk_')) {
if (item.metadata && item.metadata.TimeStamp) {
fileCount++;
}
}
}
cursor = response.cursor;
if (!cursor) break;
}
// 统计设置数量
var settingsResponse = await db.listSettings({});
var settingsCount = 0;
if (settingsResponse && settingsResponse.keys) {
settingsCount = settingsResponse.keys.length;
}
// 检查关键设置
var keySettings = {};
var settingKeys = ['manage@sysConfig@page', 'manage@sysConfig@security'];
for (var key of settingKeys) {
try {
var value = await db.get(key);
keySettings[key] = {
exists: !!value,
length: value ? value.length : 0
};
} catch (error) {
keySettings[key] = {
exists: false,
error: error.message
};
}
}
results.status = {
fileCount: fileCount,
settingsCount: settingsCount,
keySettings: keySettings
};
} else if (action === 'test') {
// 测试恢复一个简单的设置
var testKey = 'test_restore_' + Date.now();
var testValue = 'test_value_' + Date.now();
try {
// 写入测试数据
await db.put(testKey, testValue);
// 读取验证
var retrieved = await db.get(testKey);
// 清理测试数据
await db.delete(testKey);
results.test = {
success: true,
valueMatch: retrieved === testValue,
testKey: testKey,
testValue: testValue,
retrievedValue: retrieved
};
} catch (error) {
results.test = {
success: false,
error: error.message
};
}
} else if (action === 'sample') {
// 提供样本恢复数据
// 创建样本数据
var testFileKey = "test_file_" + Date.now();
var testSettingKey = "test_setting_" + Date.now();
var testSettingValue = "test_value_" + Date.now();
results.sampleData = {
timestamp: Date.now(),
version: "2.0.2",
data: {
fileCount: 1,
files: {},
settings: {}
}
};
// 动态添加文件和设置
results.sampleData.data.files[testFileKey] = {
metadata: {
FileName: "test.jpg",
FileType: "image/jpeg",
FileSize: "0.1",
TimeStamp: Date.now(),
Channel: "Test",
ListType: "None"
},
value: null
};
results.sampleData.data.settings[testSettingKey] = testSettingValue;
results.instructions = {
usage: "Use this sample data to test restore functionality",
endpoint: "/api/manage/sysConfig/backup",
method: "POST with action=restore"
};
}
return new Response(JSON.stringify(results, null, 2), {
headers: {
'Content-Type': 'application/json'
}
});
} catch (error) {
return new Response(JSON.stringify({
error: error.message,
stack: error.stack
}), {
status: 500,
headers: {
'Content-Type': 'application/json'
}
});
}
}

View File

@@ -1,104 +0,0 @@
/**
* Restore sample data test
*/
import { getDatabase } from '../../utils/databaseAdapter.js';
export async function onRequest(context) {
var env = context.env;
try {
var db = getDatabase(env);
// 使用您提供的样本数据
var sampleSettings = {
"manage@sysConfig@page": "{\"config\":[{\"id\":\"siteTitle\",\"label\":\"网站标题\",\"placeholder\":\"Sanyue ImgHub\",\"category\":\"全局设置\",\"value\":\"ChuZhong ImgHub\"},{\"id\":\"siteIcon\",\"label\":\"网站图标\",\"category\":\"全局设置\"},{\"id\":\"ownerName\",\"label\":\"图床名称\",\"placeholder\":\"Sanyue ImgHub\",\"category\":\"全局设置\",\"value\":\"ChuZhong ImgHub\"},{\"id\":\"logoUrl\",\"label\":\"图床Logo\",\"category\":\"全局设置\"},{\"id\":\"bkInterval\",\"label\":\"背景切换间隔\",\"placeholder\":\"3000\",\"tooltip\":\"单位:毫秒 ms\",\"category\":\"全局设置\"},{\"id\":\"bkOpacity\",\"label\":\"背景图透明度\",\"placeholder\":\"1\",\"tooltip\":\"0-1 之间的小数\",\"category\":\"全局设置\"},{\"id\":\"urlPrefix\",\"label\":\"默认URL前缀\",\"tooltip\":\"自定义URL前缀https://img.a.com/file/,留空则使用当前域名 <br/> 设置后将应用于客户端和管理端\",\"category\":\"全局设置\"},{\"id\":\"announcement\",\"label\":\"公告\",\"tooltip\":\"支持HTML标签\",\"category\":\"客户端设置\"},{\"id\":\"defaultUploadChannel\",\"label\":\"默认上传渠道\",\"type\":\"select\",\"options\":[{\"label\":\"Telegram\",\"value\":\"telegram\"},{\"label\":\"Cloudflare R2\",\"value\":\"cfr2\"},{\"label\":\"S3\",\"value\":\"s3\"}],\"placeholder\":\"telegram\",\"category\":\"客户端设置\"},{\"id\":\"defaultUploadFolder\",\"label\":\"默认上传目录\",\"placeholder\":\"/ 开头的合法目录,不能包含特殊字符, 默认为根目录\",\"category\":\"客户端设置\"},{\"id\":\"defaultUploadNameType\",\"label\":\"默认命名方式\",\"type\":\"select\",\"options\":[{\"label\":\"默认\",\"value\":\"default\"},{\"label\":\"仅前缀\",\"value\":\"index\"},{\"label\":\"仅原名\",\"value\":\"origin\"},{\"label\":\"短链接\",\"value\":\"short\"}],\"placeholder\":\"default\",\"category\":\"客户端设置\"},{\"id\":\"loginBkImg\",\"label\":\"登录页背景图\",\"tooltip\":\"1.填写 bing 使用必应壁纸轮播 <br/> 2.填写 [\\\"url1\\\",\\\"url2\\\"] 使用多张图片轮播 <br/> 3.填写 [\\\"url\\\"] 使用单张图片\",\"category\":\"客户端设置\"},{\"id\":\"uploadBkImg\",\"label\":\"上传页背景图\",\"tooltip\":\"1.填写 bing 使用必应壁纸轮播 <br/> 2.填写 [\\\"url1\\\",\\\"url2\\\"] 使用多张图片轮播 <br/> 3.填写 [\\\"url\\\"] 使用单张图片\",\"category\":\"客户端设置\"},{\"id\":\"footerLink\",\"label\":\"页脚传送门链接\",\"category\":\"客户端设置\"},{\"id\":\"disableFooter\",\"label\":\"隐藏页脚\",\"type\":\"boolean\",\"default\":false,\"category\":\"客户端设置\",\"value\":false},{\"id\":\"adminLoginBkImg\",\"label\":\"登录页背景图\",\"tooltip\":\"1.填写 bing 使用必应壁纸轮播 <br/> 2.填写 [\\\"url1\\\",\\\"url2\\\"] 使用多张图片轮播 <br/> 3.填写 [\\\"url\\\"] 使用单张图片\",\"category\":\"管理端设置\"}]}",
"manage@sysConfig@security": "{\"auth\":{\"user\":{\"authCode\":\"ccxy211008\"},\"admin\":{\"adminUsername\":\"chuzhong\",\"adminPassword\":\"ccxy211008\"}},\"upload\":{\"moderate\":{\"enabled\":false,\"channel\":\"default\",\"moderateContentApiKey\":\"\",\"nsfwApiPath\":\"\"}},\"access\":{\"allowedDomains\":\"\",\"whiteListMode\":false}}"
};
var results = {
beforeRestore: {},
afterRestore: {},
restoreResults: [],
errors: []
};
// 检查恢复前的状态
for (var key in sampleSettings) {
try {
var beforeValue = await db.get(key);
results.beforeRestore[key] = {
exists: !!beforeValue,
length: beforeValue ? beforeValue.length : 0
};
} catch (error) {
results.beforeRestore[key] = { error: error.message };
}
}
// 执行恢复
for (var key in sampleSettings) {
try {
var value = sampleSettings[key];
console.log('恢复设置:', key, '长度:', value.length);
await db.put(key, value);
// 立即验证
var retrieved = await db.get(key);
var success = retrieved === value;
results.restoreResults.push({
key: key,
success: success,
originalLength: value.length,
retrievedLength: retrieved ? retrieved.length : 0,
matches: success
});
if (!success) {
console.error('恢复验证失败:', key);
console.error('原始长度:', value.length);
console.error('检索长度:', retrieved ? retrieved.length : 0);
}
} catch (error) {
results.errors.push({
key: key,
error: error.message
});
console.error('恢复失败:', key, error);
}
}
// 检查恢复后的状态
for (var key in sampleSettings) {
try {
var afterValue = await db.get(key);
results.afterRestore[key] = {
exists: !!afterValue,
length: afterValue ? afterValue.length : 0
};
} catch (error) {
results.afterRestore[key] = { error: error.message };
}
}
return new Response(JSON.stringify(results, null, 2), {
headers: {
'Content-Type': 'application/json'
}
});
} catch (error) {
return new Response(JSON.stringify({
error: error.message,
stack: error.stack
}), {
status: 500,
headers: {
'Content-Type': 'application/json'
}
});
}
}

View File

@@ -1,87 +0,0 @@
/**
* Settings check tool
*/
import { getDatabase } from '../../utils/databaseAdapter.js';
export async function onRequest(context) {
var env = context.env;
try {
var db = getDatabase(env);
var results = {
allSettings: [],
expectedSettings: [
'manage@sysConfig@page',
'manage@sysConfig@security',
'manage@sysConfig@upload',
'manage@sysConfig@others'
],
missingSettings: [],
existingSettings: {}
};
// 列出所有设置
var allSettings = await db.listSettings({});
results.allSettings = allSettings.keys.map(function(item) {
return {
key: item.name,
hasValue: !!item.value,
valueLength: item.value ? item.value.length : 0
};
});
// 检查每个预期的设置
for (var i = 0; i < results.expectedSettings.length; i++) {
var settingKey = results.expectedSettings[i];
try {
var value = await db.get(settingKey);
if (value) {
results.existingSettings[settingKey] = {
exists: true,
length: value.length,
preview: value.substring(0, 200) + (value.length > 200 ? '...' : '')
};
} else {
results.missingSettings.push(settingKey);
results.existingSettings[settingKey] = {
exists: false
};
}
} catch (error) {
results.existingSettings[settingKey] = {
exists: false,
error: error.message
};
}
}
// 检查是否有其他manage@开头的设置
var manageSettings = await db.listSettings({ prefix: 'manage@' });
results.manageSettings = manageSettings.keys.map(function(item) {
return {
key: item.name,
hasValue: !!item.value,
valueLength: item.value ? item.value.length : 0
};
});
return new Response(JSON.stringify(results, null, 2), {
headers: {
'Content-Type': 'application/json'
}
});
} catch (error) {
return new Response(JSON.stringify({
error: error.message,
stack: error.stack
}), {
status: 500,
headers: {
'Content-Type': 'application/json'
}
});
}
}

View File

@@ -1,80 +0,0 @@
/**
* Upload feature test tool
*/
import { getDatabase } from '../../utils/databaseAdapter.js';
import { fetchUploadConfig, fetchSecurityConfig } from '../../utils/sysConfig.js';
export async function onRequest(context) {
var env = context.env;
try {
var results = {
databaseCheck: null,
configCheck: null,
uploadConfigCheck: null,
securityConfigCheck: null
};
// 检查数据库
try {
var db = getDatabase(env);
results.databaseCheck = {
success: true,
type: db.constructor.name || 'Unknown'
};
} catch (error) {
results.databaseCheck = {
success: false,
error: error.message
};
}
// 检查上传配置
try {
var uploadConfig = await fetchUploadConfig(env);
results.uploadConfigCheck = {
success: true,
hasChannels: !!(uploadConfig.telegram && uploadConfig.telegram.channels),
channelCount: uploadConfig.telegram ? uploadConfig.telegram.channels.length : 0
};
} catch (error) {
results.uploadConfigCheck = {
success: false,
error: error.message
};
}
// 检查安全配置
try {
var securityConfig = await fetchSecurityConfig(env);
results.securityConfigCheck = {
success: true,
hasAuth: !!(securityConfig.auth),
hasUpload: !!(securityConfig.upload)
};
} catch (error) {
results.securityConfigCheck = {
success: false,
error: error.message
};
}
return new Response(JSON.stringify(results, null, 2), {
headers: {
'Content-Type': 'application/json'
}
});
} catch (error) {
return new Response(JSON.stringify({
error: error.message,
stack: error.stack
}), {
status: 500,
headers: {
'Content-Type': 'application/json'
}
});
}
}

View File

@@ -1,5 +1,5 @@
import { fetchSecurityConfig } from "../../utils/sysConfig";
import { checkDatabaseConfig, errorHandling } from "../../utils/middleware";
import { checkDatabaseConfig } from "../../utils/middleware";
import { validateApiToken } from "../../utils/tokenValidator";
import { getDatabase } from "../../utils/databaseAdapter.js";
@@ -7,6 +7,14 @@ let securityConfig = {}
let basicUser = ""
let basicPass = ""
async function errorHandling(context) {
try {
return await context.next();
} catch (err) {
return new Response(`${err.message}\n${err.stack}`, { status: 500 });
}
}
function basicAuthentication(request) {
const Authorization = request.headers.get('Authorization');
@@ -140,22 +148,4 @@ async function authentication(context) {
}
// Temporarily disable the middleware chain to troubleshoot
// export const onRequest = [checkDatabaseConfig, errorHandling, authentication];
// Temporary simple middleware
export async function onRequest(context) {
try {
return await context.next();
} catch (error) {
console.error('Manage middleware error:', error);
return new Response(JSON.stringify({
error: 'Manage middleware error: ' + error.message
}), {
status: 500,
headers: {
'Content-Type': 'application/json'
}
});
}
}
export const onRequest = [checkDatabaseConfig, errorHandling, authentication];

View File

@@ -4,11 +4,7 @@ export async function onRequest(context) {
// API Token管理支持创建、删除、列出Token
const {
request,
env,
params,
waitUntil,
next,
data,
env
} = context;
const db = getDatabase(env);

View File

@@ -13,8 +13,8 @@ export async function onRequest(context) {
try {
const kv = getDatabase(env);
let list = await kv.get("manage@blockipList");
const db = getDatabase(env);
let list = await db.get("manage@blockipList");
if (list == null) {
list = [];
} else {
@@ -29,7 +29,7 @@ export async function onRequest(context) {
// Add the IP to the list
list.push(ip);
await kv.put("manage@blockipList", list.join(","));
await db.put("manage@blockipList", list.join(","));
return new Response('Add ip to block list successfully', { status: 200 });
} catch (e) {
return new Response('Add ip to block list failed', { status: 500 });

View File

@@ -11,8 +11,8 @@ export async function onRequest(context) {
data, // arbitrary space for passing data between middlewares
} = context;
try {
const kv = getDatabase(env);
let list = await kv.get("manage@blockipList");
const db = getDatabase(env);
let list = await db.get("manage@blockipList");
if (list == null) {
list = [];
} else {
@@ -27,7 +27,7 @@ export async function onRequest(context) {
// Remove the IP from the list
list = list.filter(item => item !== ip);
await kv.put("manage@blockipList", list.join(","));
await db.put("manage@blockipList", list.join(","));
return new Response('delete ip from block ip list successfully', { status: 200 });
} catch (e) {
return new Response('delete ip from block ip list failed', { status: 500 });

View File

@@ -1,4 +1,5 @@
import { readIndex, getIndexInfo, rebuildIndex, getIndexStorageStats } from '../../utils/indexManager.js';
import { readIndex, mergeOperationsToIndex, deleteAllOperations, rebuildIndex,
getIndexInfo, getIndexStorageStats } from '../../utils/indexManager.js';
import { getDatabase } from '../../utils/databaseAdapter.js';
export async function onRequest(context) {
@@ -41,6 +42,24 @@ export async function onRequest(context) {
});
}
// Special action: merge pending atomic operations into the index
if (action === 'merge-operations') {
waitUntil(mergeOperationsToIndex(context));
return new Response('Operations merged into index asynchronously', {
headers: { "Content-Type": "text/plain" }
});
}
// Special action: delete all atomic operations
if (action === 'delete-operations') {
waitUntil(deleteAllOperations(context));
return new Response('All operations deleted asynchronously', {
headers: { "Content-Type": "text/plain" }
});
}
// Special action: get index storage information
if (action === 'index-storage-stats') {
const stats = await getIndexStorageStats(context);
@@ -88,13 +107,13 @@ export async function onRequest(context) {
// Index read failed; fall back to fetching all file records directly from the database
if (!result.success) {
const kvRecords = await getAllFileRecords(context.env, dir);
const dbRecords = await getAllFileRecords(context.env, dir);
return new Response(JSON.stringify({
files: kvRecords.files,
directories: kvRecords.directories,
totalCount: kvRecords.totalCount,
returnedCount: kvRecords.returnedCount,
files: dbRecords.files,
directories: dbRecords.directories,
totalCount: dbRecords.totalCount,
returnedCount: dbRecords.returnedCount,
indexLastUpdated: Date.now(),
isIndexedResponse: false // marks this as a response built without the index
}), {
@@ -167,31 +186,31 @@ async function getAllFileRecords(env, dir) {
allRecords.push(item);
}
if (!cursor) break;
// Cooperative yield point
await new Promise(resolve => setTimeout(resolve, 10));
}
// Extract directory information
const directories = new Set();
const filteredRecords = [];
allRecords.forEach(item => {
const subDir = item.name.substring(dir.length);
const firstSlashIndex = subDir.indexOf('/');
if (firstSlashIndex !== -1) {
directories.add(dir + subDir.substring(0, firstSlashIndex));
} else {
filteredRecords.push(item);
if (!cursor) break;
// Cooperative yield point
await new Promise(resolve => setTimeout(resolve, 10));
}
});
return {
files: filteredRecords,
directories: Array.from(directories),
totalCount: allRecords.length,
returnedCount: filteredRecords.length
};
// Extract directory information
const directories = new Set();
const filteredRecords = [];
allRecords.forEach(item => {
const subDir = item.name.substring(dir.length);
const firstSlashIndex = subDir.indexOf('/');
if (firstSlashIndex !== -1) {
directories.add(dir + subDir.substring(0, firstSlashIndex));
} else {
filteredRecords.push(item);
}
});
return {
files: filteredRecords,
directories: Array.from(directories),
totalCount: allRecords.length,
returnedCount: filteredRecords.length
};
} catch (error) {
console.error('Error in getAllFileRecords:', error);

View File

@@ -1,262 +0,0 @@
/**
* Data migration tool
* Migrates KV data to the D1 database
*/
import { getDatabase, checkDatabaseConfig } from '../../utils/databaseAdapter.js';
export async function onRequest(context) {
const { request, env } = context;
const url = new URL(request.url);
const action = url.searchParams.get('action');
try {
switch (action) {
case 'check':
return await handleCheck(env);
case 'migrate':
return await handleMigrate(env);
case 'status':
return await handleStatus(env);
default:
return new Response(JSON.stringify({ error: '不支持的操作' }), {
status: 400,
headers: { 'Content-Type': 'application/json' }
});
}
} catch (error) {
console.error('迁移操作错误:', error);
return new Response(JSON.stringify({ error: '操作失败: ' + error.message }), {
status: 500,
headers: { 'Content-Type': 'application/json' }
});
}
}
// Check the migration environment
async function handleCheck(env) {
const dbConfig = checkDatabaseConfig(env);
const result = {
hasKV: dbConfig.hasKV,
hasD1: dbConfig.hasD1,
canMigrate: dbConfig.hasKV && dbConfig.hasD1,
currentDatabase: dbConfig.usingD1 ? 'D1' : (dbConfig.usingKV ? 'KV' : 'None'),
message: ''
};
if (!result.canMigrate) {
if (!result.hasKV) {
result.message = '未找到KV存储无法进行迁移';
} else if (!result.hasD1) {
result.message = '未找到D1数据库请先配置D1数据库';
}
} else {
result.message = '环境检查通过,可以开始迁移';
}
return new Response(JSON.stringify(result), {
headers: { 'Content-Type': 'application/json' }
});
}
// Run the migration
async function handleMigrate(env) {
const dbConfig = checkDatabaseConfig(env);
if (!dbConfig.hasKV || !dbConfig.hasD1) {
return new Response(JSON.stringify({
error: '迁移环境不满足要求',
hasKV: dbConfig.hasKV,
hasD1: dbConfig.hasD1
}), {
status: 400,
headers: { 'Content-Type': 'application/json' }
});
}
const migrationResult = {
startTime: new Date().toISOString(),
files: { migrated: 0, failed: 0, errors: [] },
settings: { migrated: 0, failed: 0, errors: [] },
operations: { migrated: 0, failed: 0, errors: [] },
status: 'running'
};
try {
// 1. 迁移文件数据
console.log('开始迁移文件数据...');
await migrateFiles(env, migrationResult);
// 2. 迁移系统设置
console.log('开始迁移系统设置...');
await migrateSettings(env, migrationResult);
// 3. 迁移索引操作
console.log('开始迁移索引操作...');
await migrateIndexOperations(env, migrationResult);
migrationResult.status = 'completed';
migrationResult.endTime = new Date().toISOString();
} catch (error) {
migrationResult.status = 'failed';
migrationResult.error = error.message;
migrationResult.endTime = new Date().toISOString();
}
return new Response(JSON.stringify(migrationResult), {
headers: { 'Content-Type': 'application/json' }
});
}
// Migrate file data
async function migrateFiles(env, result) {
const db = getDatabase(env);
let cursor = null;
const batchSize = 100;
while (true) {
const response = await getDatabase(env).list({
limit: batchSize,
cursor: cursor
});
for (const item of response.keys) {
// 跳过管理相关的键
if (item.name.startsWith('manage@') || item.name.startsWith('chunk_')) {
continue;
}
try {
const fileData = await getDatabase(env).getWithMetadata(item.name);
if (fileData && fileData.metadata) {
await db.putFile(item.name, fileData.value || '', {
metadata: fileData.metadata
});
result.files.migrated++;
}
} catch (error) {
result.files.failed++;
result.files.errors.push({
file: item.name,
error: error.message
});
console.error(`迁移文件 ${item.name} 失败:`, error);
}
}
cursor = response.cursor;
if (!cursor) break;
// 添加延迟避免过载
await new Promise(resolve => setTimeout(resolve, 10));
}
}
// Migrate system settings
async function migrateSettings(env, result) {
const db = getDatabase(env);
const settingsList = await getDatabase(env).list({ prefix: 'manage@' });
for (const item of settingsList.keys) {
// 跳过索引相关的键
if (item.name.startsWith('manage@index')) {
continue;
}
try {
const value = await getDatabase(env).get(item.name);
if (value) {
await db.putSetting(item.name, value);
result.settings.migrated++;
}
} catch (error) {
result.settings.failed++;
result.settings.errors.push({
setting: item.name,
error: error.message
});
console.error(`迁移设置 ${item.name} 失败:`, error);
}
}
}
// Migrate index operations
async function migrateIndexOperations(env, result) {
const db = getDatabase(env);
const operationPrefix = 'manage@index@operation_';
const operationsList = await getDatabase(env).list({ prefix: operationPrefix });
for (const item of operationsList.keys) {
try {
const operationData = await getDatabase(env).get(item.name);
if (operationData) {
const operation = JSON.parse(operationData);
const operationId = item.name.replace(operationPrefix, '');
await db.putIndexOperation(operationId, operation);
result.operations.migrated++;
}
} catch (error) {
result.operations.failed++;
result.operations.errors.push({
operation: item.name,
error: error.message
});
console.error(`迁移操作 ${item.name} 失败:`, error);
}
}
}
// Get the migration status
async function handleStatus(env) {
const dbConfig = checkDatabaseConfig(env);
let fileCount = { kv: 0, d1: 0 };
let settingCount = { kv: 0, d1: 0 };
try {
// 统计KV中的数据
if (dbConfig.hasKV) {
const kvFiles = await getDatabase(env).list({ limit: 1000 });
fileCount.kv = kvFiles.keys.filter(k =>
!k.name.startsWith('manage@') && !k.name.startsWith('chunk_')
).length;
const kvSettings = await getDatabase(env).list({ prefix: 'manage@', limit: 1000 });
settingCount.kv = kvSettings.keys.filter(k =>
!k.name.startsWith('manage@index')
).length;
}
// 统计D1中的数据
if (dbConfig.hasD1) {
const db = getDatabase(env);
const fileCountStmt = db.db.prepare('SELECT COUNT(*) as count FROM files');
const fileResult = await fileCountStmt.first();
fileCount.d1 = fileResult.count;
const settingCountStmt = db.db.prepare('SELECT COUNT(*) as count FROM settings');
const settingResult = await settingCountStmt.first();
settingCount.d1 = settingResult.count;
}
} catch (error) {
console.error('获取状态失败:', error);
}
return new Response(JSON.stringify({
database: dbConfig,
counts: {
files: fileCount,
settings: settingCount
},
migrationNeeded: fileCount.kv > 0 && fileCount.d1 === 0
}), {
headers: { 'Content-Type': 'application/json' }
});
}

View File

@@ -2,6 +2,7 @@ import { S3Client, CopyObjectCommand, DeleteObjectCommand } from "@aws-sdk/clien
import { purgeCFCache } from "../../../utils/purgeCache";
import { moveFileInIndex, batchMoveFilesInIndex } from "../../../utils/indexManager.js";
import { getDatabase } from '../../../utils/databaseAdapter.js';
export async function onRequest(context) {
const { request, env, params, waitUntil } = context;
@@ -124,8 +125,10 @@ export async function onRequest(context) {
// Core function for moving a single file
async function moveFile(env, fileId, newFileId, cdnUrl, url) {
try {
const db = getDatabase(env);
// Read the image record
const img = await getDatabase(env).getWithMetadata(fileId);
const img = await db.getWithMetadata(fileId);
// For images on the R2 channel, the corresponding R2 object must also be moved
if (img.metadata?.Channel === 'CloudflareR2') {
@@ -168,8 +171,8 @@ async function moveFile(env, fileId, newFileId, cdnUrl, url) {
img.metadata.Folder = folderPath;
// Update the database record
await getDatabase(env).put(newFileId, img.value, { metadata: img.metadata });
await getDatabase(env).delete(fileId);
await db.put(newFileId, img.value, { metadata: img.metadata });
await db.delete(fileId);
// Purge the CDN cache
await purgeCFCache(env, cdnUrl);

View File

@@ -32,6 +32,8 @@ export async function onRequest(context) {
async function handleBackup(context) {
const { env } = context;
try {
const db = getDatabase(env);
const backupData = {
timestamp: Date.now(),
version: '2.0.2',
@@ -42,47 +44,16 @@ async function handleBackup(context) {
}
};
// 直接从数据库读取所有文件信息,不依赖索引
const db = getDatabase(env);
let allFiles = [];
let cursor = null;
// 分批获取所有文件
while (true) {
const response = await db.listFiles({
limit: 1000,
cursor: cursor
});
if (!response || !response.keys || !Array.isArray(response.keys)) {
break;
}
for (const item of response.keys) {
// 跳过管理相关的键和分块数据
if (item.name.startsWith('manage@') || item.name.startsWith('chunk_')) {
continue;
}
// 跳过没有元数据的文件
if (!item.metadata || !item.metadata.TimeStamp) {
continue;
}
allFiles.push({
id: item.name,
metadata: item.metadata
});
}
cursor = response.cursor;
if (!cursor) break;
}
backupData.data.fileCount = allFiles.length;
// 首先从索引中读取所有文件信息
const indexResult = await readIndex(context, {
count: -1, // 获取所有文件
start: 0,
includeSubdirFiles: true // 包含子目录下的文件
});
backupData.data.fileCount = indexResult.files.length;
// 备份文件数据
for (const file of allFiles) {
for (const file of indexResult.files) {
const fileId = file.id;
const metadata = file.metadata;
@@ -112,32 +83,17 @@ async function handleBackup(context) {
}
// Back up system settings
// db was already defined above
// Back up all settings, not only those prefixed with manage@
const allSettingsList = await db.listSettings({});
for (const key of allSettingsList.keys) {
const settingsList = await db.list({ prefix: 'manage@' });
for (const key of settingsList.keys) {
// 忽略索引文件
if (key.name.startsWith('manage@index')) continue;
const setting = key.value;
const setting = await db.get(key.name);
if (setting) {
backupData.data.settings[key.name] = setting;
}
}
// 额外确保备份manage@开头的设置
const manageSettingsList = await db.listSettings({ prefix: 'manage@' });
for (const key of manageSettingsList.keys) {
// 忽略索引文件
if (key.name.startsWith('manage@index')) continue;
const setting = key.value;
if (setting && !backupData.data.settings[key.name]) {
backupData.data.settings[key.name] = setting;
}
}
const backupJson = JSON.stringify(backupData, null, 2);
return new Response(backupJson, {
@@ -155,6 +111,8 @@ async function handleBackup(context) {
// Handle the restore operation
async function handleRestore(request, env) {
try {
const db = getDatabase(env);
const contentType = request.headers.get('content-type');
if (!contentType || !contentType.includes('application/json')) {
@@ -178,56 +136,30 @@ async function handleRestore(request, env) {
let restoredSettings = 0;
// Restore file data
const db = getDatabase(env);
const fileEntries = Object.entries(backupData.data.files);
const batchSize = 50; // 批量处理,避免超时
for (let i = 0; i < fileEntries.length; i += batchSize) {
const batch = fileEntries.slice(i, i + batchSize);
for (const [key, fileData] of batch) {
try {
if (fileData.value) {
// 对于有value的文件如telegram分块文件恢复完整数据
await db.put(key, fileData.value, {
metadata: fileData.metadata
});
} else if (fileData.metadata) {
// 只恢复元数据
await db.put(key, '', {
metadata: fileData.metadata
});
}
restoredFiles++;
} catch (error) {
console.error(`恢复文件 ${key} 失败:`, error);
for (const [key, fileData] of Object.entries(backupData.data.files)) {
try {
if (fileData.value) {
// 对于有value的文件如telegram分块文件恢复完整数据
await db.put(key, fileData.value, {
metadata: fileData.metadata
});
} else if (fileData.metadata) {
// 只恢复元数据
await db.put(key, '', {
metadata: fileData.metadata
});
}
}
// 每批处理后短暂暂停,避免过载
if (i + batchSize < fileEntries.length) {
await new Promise(resolve => setTimeout(resolve, 10));
restoredFiles++;
} catch (error) {
console.error(`恢复文件 ${key} 失败:`, error);
}
}
// Restore system settings
const settingEntries = Object.entries(backupData.data.settings);
console.log(`开始恢复 ${settingEntries.length} 个设置`);
for (const [key, value] of settingEntries) {
for (const [key, value] of Object.entries(backupData.data.settings)) {
try {
console.log(`恢复设置: ${key}, 长度: ${value.length}`);
await db.put(key, value);
// 验证是否成功保存
const retrieved = await db.get(key);
if (retrieved === value) {
restoredSettings++;
console.log(`设置 ${key} 恢复成功`);
} else {
console.error(`设置 ${key} 恢复后验证失败`);
console.error(`原始长度: ${value.length}, 检索长度: ${retrieved ? retrieved.length : 'null'}`);
}
restoredSettings++;
} catch (error) {
console.error(`恢复设置 ${key} 失败:`, error);
}

View File

@@ -1,3 +1,3 @@
import { checkKVConfig } from '../utils/middleware';
import { checkDatabaseConfig } from '../utils/middleware';
export const onRequest = [checkKVConfig];
export const onRequest = [checkDatabaseConfig];

View File

@@ -1,3 +1,3 @@
import { checkKVConfig } from '../utils/middleware';
import { checkDatabaseConfig } from '../utils/middleware';
export const onRequest = [checkKVConfig];
export const onRequest = [checkDatabaseConfig];

View File

@@ -1,3 +1,3 @@
import { errorHandling, telemetryData, checkKVConfig } from '../utils/middleware';
import { errorHandling, telemetryData, checkDatabaseConfig } from '../utils/middleware';
export const onRequest = [checkKVConfig, errorHandling, telemetryData];
export const onRequest = [checkDatabaseConfig, errorHandling, telemetryData];

View File

@@ -93,9 +93,10 @@ export async function handleChunkUpload(context) {
return createResponse('Error: Missing chunk upload parameters', { status: 400 });
}
const db = getDatabase(env);
// Validate the upload session
const sessionKey = `upload_session_${uploadId}`;
const sessionData = await getDatabase(env).get(sessionKey);
const sessionData = await db.get(sessionKey);
if (!sessionData) {
return createResponse('Error: Invalid or expired upload session', { status: 400 });
}
@@ -135,7 +136,7 @@ export async function handleChunkUpload(context) {
};
// 立即保存分块记录和数据,设置过期时间
await getDatabase(env).put(chunkKey, chunkData, {
await db.put(chunkKey, chunkData, {
metadata: initialChunkMetadata,
expirationTtl: 3600 // 1小时过期
});
@@ -193,6 +194,7 @@ export async function handleCleanupRequest(context, uploadId, totalChunks) {
// Asynchronously upload the chunk to the storage backend, with timeout protection
async function uploadChunkToStorageWithTimeout(context, chunkIndex, totalChunks, uploadId, originalFileName, originalFileType, uploadChannel) {
const { env } = context;
const db = getDatabase(env);
const chunkKey = `chunk_${uploadId}_${chunkIndex.toString().padStart(3, '0')}`;
const UPLOAD_TIMEOUT = 180000; // 3分钟超时
@@ -213,7 +215,7 @@ async function uploadChunkToStorageWithTimeout(context, chunkIndex, totalChunks,
// 超时或失败时,更新状态为超时/失败
try {
const chunkRecord = await getDatabase(env).getWithMetadata(chunkKey, { type: 'arrayBuffer' });
const chunkRecord = await db.getWithMetadata(chunkKey, { type: 'arrayBuffer' });
if (chunkRecord && chunkRecord.metadata) {
const isTimeout = error.message === 'Upload timeout';
const errorMetadata = {
@@ -225,7 +227,7 @@ async function uploadChunkToStorageWithTimeout(context, chunkIndex, totalChunks,
};
// 保留原始数据以便重试
await getDatabase(env).put(chunkKey, chunkRecord.value, {
await db.put(chunkKey, chunkRecord.value, {
metadata: errorMetadata,
expirationTtl: 3600
});
@@ -239,16 +241,17 @@ async function uploadChunkToStorageWithTimeout(context, chunkIndex, totalChunks,
// Asynchronously upload the chunk to the storage backend, retrying automatically on failure
async function uploadChunkToStorage(context, chunkIndex, totalChunks, uploadId, originalFileName, originalFileType, uploadChannel) {
const { env } = context;
const db = getDatabase(env);
const chunkKey = `chunk_${uploadId}_${chunkIndex.toString().padStart(3, '0')}`;
const MAX_RETRIES = 3;
try {
// Get the chunk data and metadata from KV
const chunkRecord = await getDatabase(env).getWithMetadata(chunkKey, { type: 'arrayBuffer' });
// Get the chunk data and metadata from the database
const chunkRecord = await db.getWithMetadata(chunkKey, { type: 'arrayBuffer' });
if (!chunkRecord || !chunkRecord.value) {
console.error(`Chunk ${chunkIndex} data not found in KV`);
console.error(`Chunk ${chunkIndex} data not found in database`);
return;
}
@@ -277,7 +280,7 @@ async function uploadChunkToStorage(context, chunkIndex, totalChunks, uploadId,
};
// 只保存metadata不保存原始数据设置过期时间
await getDatabase(env).put(chunkKey, '', {
await db.put(chunkKey, '', {
metadata: updatedMetadata,
expirationTtl: 3600 // 1小时过期
});
@@ -295,7 +298,7 @@ async function uploadChunkToStorage(context, chunkIndex, totalChunks, uploadId,
};
// 保留原始数据以便重试,设置过期时间
await getDatabase(env).put(chunkKey, chunkData, {
await db.put(chunkKey, chunkData, {
metadata: failedMetadata,
expirationTtl: 3600 // 1小时过期
});
@@ -309,7 +312,7 @@ async function uploadChunkToStorage(context, chunkIndex, totalChunks, uploadId,
// 发生异常时,确保保留原始数据并标记为失败
try {
const chunkRecord = await getDatabase(env).getWithMetadata(chunkKey, { type: 'arrayBuffer' });
const chunkRecord = await db.getWithMetadata(chunkKey, { type: 'arrayBuffer' });
if (chunkRecord && chunkRecord.metadata) {
const errorMetadata = {
...chunkRecord.metadata,
@@ -318,7 +321,7 @@ async function uploadChunkToStorage(context, chunkIndex, totalChunks, uploadId,
failedTime: Date.now()
};
await getDatabase(env).put(chunkKey, chunkRecord.value, {
await db.put(chunkKey, chunkRecord.value, {
metadata: errorMetadata,
expirationTtl: 3600 // 1小时过期
});
@@ -332,6 +335,7 @@ async function uploadChunkToStorage(context, chunkIndex, totalChunks, uploadId,
// 上传单个分块到R2 (Multipart Upload)
async function uploadSingleChunkToR2Multipart(context, chunkData, chunkIndex, totalChunks, uploadId, originalFileName, originalFileType) {
const { env, uploadConfig } = context;
const db = getDatabase(env);
try {
const r2Settings = uploadConfig.cfr2;
@@ -355,7 +359,7 @@ async function uploadSingleChunkToR2Multipart(context, chunkData, chunkIndex, to
};
// 保存multipart info
await getDatabase(env).put(multipartKey, JSON.stringify(multipartInfo), {
await db.put(multipartKey, JSON.stringify(multipartInfo), {
expirationTtl: 3600 // 1小时过期
});
} else {
@@ -365,7 +369,7 @@ async function uploadSingleChunkToR2Multipart(context, chunkData, chunkIndex, to
const maxRetries = 30; // 最多等待60秒
while (!multipartInfoData && retryCount < maxRetries) {
multipartInfoData = await getDatabase(env).get(multipartKey);
multipartInfoData = await db.get(multipartKey);
if (!multipartInfoData) {
// 等待2秒后重试
await new Promise(resolve => setTimeout(resolve, 2000));
@@ -383,7 +387,7 @@ async function uploadSingleChunkToR2Multipart(context, chunkData, chunkIndex, to
}
// 获取multipart info
const multipartInfoData = await getDatabase(env).get(multipartKey);
const multipartInfoData = await db.get(multipartKey);
if (!multipartInfoData) {
return { success: false, error: 'Multipart upload not initialized' };
}
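This hunk coordinates Cloudflare R2's multipart upload across chunks: the first chunk creates the multipart upload and stores its state under `multipart_${uploadId}`, and later chunks poll for that record before uploading their part. A standalone sketch of the underlying R2 calls, assuming the `img_r2` binding used elsewhere in this commit (the key and part bookkeeping here are illustrative):

```js
// Sketch of the R2 multipart primitives this flow builds on.
async function r2MultipartSketch(env, objectKey, chunks /* ArrayBuffer[] */) {
  const upload = await env.img_r2.createMultipartUpload(objectKey);
  const uploadedParts = [];
  for (let i = 0; i < chunks.length; i++) {
    // R2 part numbers are 1-based
    uploadedParts.push(await upload.uploadPart(i + 1, chunks[i]));
  }
  // Completing the upload assembles all parts into the final object
  return await upload.complete(uploadedParts);
}
```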
@@ -419,6 +423,7 @@ async function uploadSingleChunkToR2Multipart(context, chunkData, chunkIndex, to
// 上传单个分块到S3 (Multipart Upload)
async function uploadSingleChunkToS3Multipart(context, chunkData, chunkIndex, totalChunks, uploadId, originalFileName, originalFileType) {
const { env, uploadConfig } = context;
const db = getDatabase(env);
try {
const s3Settings = uploadConfig.s3;
@@ -461,7 +466,7 @@ async function uploadSingleChunkToS3Multipart(context, chunkData, chunkIndex, to
};
// 保存multipart info
await getDatabase(env).put(multipartKey, JSON.stringify(multipartInfo), {
await db.put(multipartKey, JSON.stringify(multipartInfo), {
expirationTtl: 3600 // 1小时过期
});
} else {
@@ -471,7 +476,7 @@ async function uploadSingleChunkToS3Multipart(context, chunkData, chunkIndex, to
const maxRetries = 30; // 最多等待60秒
while (!multipartInfoData && retryCount < maxRetries) {
multipartInfoData = await getDatabase(env).get(multipartKey);
multipartInfoData = await db.get(multipartKey);
if (!multipartInfoData) {
// 等待2秒后重试
await new Promise(resolve => setTimeout(resolve, 2000));
@@ -489,7 +494,7 @@ async function uploadSingleChunkToS3Multipart(context, chunkData, chunkIndex, to
}
// 获取multipart info
const multipartInfoData = await getDatabase(env).get(multipartKey);
const multipartInfoData = await db.get(multipartKey);
if (!multipartInfoData) {
return { success: false, error: 'Multipart upload not initialized' };
}
@@ -683,12 +688,13 @@ export async function retryFailedChunks(context, failedChunks, uploadChannel, op
// 重试单个失败的分块
async function retrySingleChunk(context, chunk, uploadChannel, maxRetries = 5, retryTimeout = 60000) {
const { env } = context;
const db = getDatabase(env);
let retryCount = 0;
let lastError = null;
try {
const chunkRecord = await getDatabase(env).getWithMetadata(chunk.key, { type: 'arrayBuffer' });
const chunkRecord = await db.getWithMetadata(chunk.key, { type: 'arrayBuffer' });
if (!chunkRecord || !chunkRecord.value) {
console.error(`Chunk ${chunk.index} data missing for retry`);
return { success: false, chunk, reason: 'data_missing', error: 'Chunk data not found' };
@@ -706,7 +712,7 @@ async function retrySingleChunk(context, chunk, uploadChannel, maxRetries = 5, r
status: 'retrying',
};
await getDatabase(env).put(chunk.key, chunkData, {
await db.put(chunk.key, chunkData, {
metadata: retryMetadata,
expirationTtl: 3600
});
@@ -744,7 +750,7 @@ async function retrySingleChunk(context, chunk, uploadChannel, maxRetries = 5, r
};
// 删除原始数据,只保留上传结果,设置过期时间
await getDatabase(env).put(chunk.key, '', {
await db.put(chunk.key, '', {
metadata: updatedMetadata,
expirationTtl: 3600 // 1小时过期
});
@@ -766,14 +772,14 @@ async function retrySingleChunk(context, chunk, uploadChannel, maxRetries = 5, r
// 更新重试失败状态
try {
const chunkRecord = await getDatabase(env).getWithMetadata(chunk.key, { type: 'arrayBuffer' });
const chunkRecord = await db.getWithMetadata(chunk.key, { type: 'arrayBuffer' });
if (chunkRecord) {
const failedRetryMetadata = {
...chunkRecord.metadata,
status: isTimeout ? 'retry_timeout' : 'retry_failed'
};
await getDatabase(env).put(chunk.key, chunkRecord.value, {
await db.put(chunk.key, chunkRecord.value, {
metadata: failedRetryMetadata,
expirationTtl: 3600
});
@@ -797,10 +803,11 @@ async function retrySingleChunk(context, chunk, uploadChannel, maxRetries = 5, r
// 清理失败的multipart upload
export async function cleanupFailedMultipartUploads(context, uploadId, uploadChannel) {
const { env, uploadConfig } = context;
const db = getDatabase(env);
try {
const multipartKey = `multipart_${uploadId}`;
const multipartInfoData = await getDatabase(env).get(multipartKey);
const multipartInfoData = await db.get(multipartKey);
if (!multipartInfoData) {
return; // 没有multipart upload需要清理
@@ -839,7 +846,7 @@ export async function cleanupFailedMultipartUploads(context, uploadId, uploadCha
}
// 清理multipart info
await getDatabase(env).delete(multipartKey);
await db.delete(multipartKey);
console.log(`Cleaned up failed multipart upload for ${uploadId}`);
} catch (error) {
@@ -852,11 +859,13 @@ export async function cleanupFailedMultipartUploads(context, uploadId, uploadCha
export async function checkChunkUploadStatuses(env, uploadId, totalChunks) {
const chunkStatuses = [];
const currentTime = Date.now();
const db = getDatabase(env);
for (let i = 0; i < totalChunks; i++) {
const chunkKey = `chunk_${uploadId}_${i.toString().padStart(3, '0')}`;
try {
const chunkRecord = await getDatabase(env).getWithMetadata(chunkKey, { type: 'arrayBuffer' });
const chunkRecord = await db.getWithMetadata(chunkKey, { type: 'arrayBuffer' });
if (chunkRecord && chunkRecord.metadata) {
let status = chunkRecord.metadata.status || 'unknown';
@@ -872,7 +881,7 @@ export async function checkChunkUploadStatuses(env, uploadId, totalChunks) {
timeoutDetectedTime: currentTime
};
await getDatabase(env).put(chunkKey, chunkRecord.value, {
await db.put(chunkKey, chunkRecord.value, {
metadata: timeoutMetadata,
expirationTtl: 3600
}).catch(err => console.warn(`Failed to update timeout status for chunk ${i}:`, err));
@@ -930,17 +939,19 @@ export async function checkChunkUploadStatuses(env, uploadId, totalChunks) {
// 清理临时分块数据
export async function cleanupChunkData(env, uploadId, totalChunks) {
try {
const db = getDatabase(env);
for (let i = 0; i < totalChunks; i++) {
const chunkKey = `chunk_${uploadId}_${i.toString().padStart(3, '0')}`;
// 删除KV中的分块记录
await getDatabase(env).delete(chunkKey);
// 删除数据库中的分块记录
await db.delete(chunkKey);
}
// 清理multipart info如果存在
const multipartKey = `multipart_${uploadId}`;
await getDatabase(env).delete(multipartKey);
await db.delete(multipartKey);
} catch (cleanupError) {
console.warn('Failed to cleanup chunk data:', cleanupError);
}
@@ -949,8 +960,10 @@ export async function cleanupChunkData(env, uploadId, totalChunks) {
// 清理上传会话
export async function cleanupUploadSession(env, uploadId) {
try {
const db = getDatabase(env);
const sessionKey = `upload_session_${uploadId}`;
await getDatabase(env).delete(sessionKey);
await db.delete(sessionKey);
console.log(`Cleaned up upload session for ${uploadId}`);
} catch (cleanupError) {
console.warn('Failed to cleanup upload session:', cleanupError);
@@ -960,11 +973,12 @@ export async function cleanupUploadSession(env, uploadId) {
// 强制清理所有相关数据(用于彻底清理失败的上传)
export async function forceCleanupUpload(context, uploadId, totalChunks) {
const { env } = context;
const db = getDatabase(env);
try {
// 读取 session 信息
const sessionKey = `upload_session_${uploadId}`;
const sessionRecord = await getDatabase(env).get(sessionKey);
const sessionRecord = await db.get(sessionKey);
const uploadChannel = sessionRecord ? JSON.parse(sessionRecord).uploadChannel : 'cfr2'; // 默认使用 cfr2
// 清理 multipart upload信息
@@ -975,7 +989,7 @@ export async function forceCleanupUpload(context, uploadId, totalChunks) {
// 清理所有分块
for (let i = 0; i < totalChunks; i++) {
const chunkKey = `chunk_${uploadId}_${i.toString().padStart(3, '0')}`;
cleanupPromises.push(getDatabase(env).delete(chunkKey).catch(err =>
cleanupPromises.push(db.delete(chunkKey).catch(err =>
console.warn(`Failed to delete chunk ${i}:`, err)
));
}
@@ -988,7 +1002,7 @@ export async function forceCleanupUpload(context, uploadId, totalChunks) {
];
keysToCleanup.forEach(key => {
cleanupPromises.push(getDatabase(env).delete(key).catch(err =>
cleanupPromises.push(db.delete(key).catch(err =>
console.warn(`Failed to delete key ${key}:`, err)
));
});
@@ -1004,6 +1018,7 @@ export async function forceCleanupUpload(context, uploadId, totalChunks) {
/* ======= 单个大文件分块上传到Telegram ======= */
export async function uploadLargeFileToTelegram(context, file, fullId, metadata, fileName, fileType, returnLink, tgBotToken, tgChatId, tgChannel) {
const { env, waitUntil } = context;
const db = getDatabase(env);
const CHUNK_SIZE = 20 * 1024 * 1024; // 20MB
const fileSize = file.size;
@@ -1079,8 +1094,8 @@ export async function uploadLargeFileToTelegram(context, file, fullId, metadata,
throw new Error(`Chunk count mismatch: expected ${totalChunks}, got ${chunks.length}`);
}
// 写入最终的KV记录分片信息作为value
await getDatabase(env).put(fullId, chunksData, { metadata });
// 写入最终的数据库记录,分片信息作为value
await db.put(fullId, chunksData, { metadata });
// 异步结束上传
waitUntil(endUpload(context, fullId, metadata));
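For files too large for a single Telegram upload, the code above splits the file into 20MB chunks, uploads each chunk, and finally writes one database record whose value is the serialized chunk list and whose metadata describes the file. The exact per-chunk fields are not visible in this hunk; the shape below is an assumption, purely to illustrate the "chunk list as value, file info as metadata" layout:

```js
// Illustrative only: field names inside each chunk entry are assumptions.
async function putChunkedTelegramRecordSketch(db, fullId, metadata, chunks) {
  const chunksData = JSON.stringify(
    chunks.map((c, i) => ({ index: i, size: c.size, tgFileId: c.tgFileId }))
  );
  // Chunk list stored as the record value, file info stored as metadata
  await db.put(fullId, chunksData, { metadata });
}
```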

View File

@@ -214,6 +214,7 @@ async function processFileUpload(context, formdata = null) {
// 上传到Cloudflare R2
async function uploadFileToCloudflareR2(context, fullId, metadata, returnLink) {
const { env, waitUntil, uploadConfig, formdata } = context;
const db = getDatabase(env);
// 检查R2数据库是否配置
if (typeof env.img_r2 == "undefined" || env.img_r2 == null || env.img_r2 == "") {
@@ -244,7 +245,6 @@ async function uploadFileToCloudflareR2(context, fullId, metadata, returnLink) {
// 写入数据库
try {
const db = getDatabase(env);
await db.put(fullId, "", {
metadata: metadata,
});
@@ -271,6 +271,8 @@ async function uploadFileToCloudflareR2(context, fullId, metadata, returnLink) {
// 上传到 S3支持自定义端点
async function uploadFileToS3(context, fullId, metadata, returnLink) {
const { env, waitUntil, uploadConfig, securityConfig, url, formdata } = context;
const db = getDatabase(env);
const uploadModerate = securityConfig.upload.moderate;
const s3Settings = uploadConfig.s3;
@@ -339,7 +341,6 @@ async function uploadFileToS3(context, fullId, metadata, returnLink) {
// 图像审查
if (uploadModerate && uploadModerate.enabled) {
try {
const db = getDatabase(env);
await db.put(fullId, "", { metadata });
} catch {
return createResponse("Error: Failed to write to KV database", { status: 500 });
@@ -352,7 +353,6 @@ async function uploadFileToS3(context, fullId, metadata, returnLink) {
// 写入数据库
try {
const db = getDatabase(env);
await db.put(fullId, "", { metadata });
} catch {
return createResponse("Error: Failed to write to database", { status: 500 });
@@ -376,6 +376,7 @@ async function uploadFileToS3(context, fullId, metadata, returnLink) {
// 上传到Telegram
async function uploadFileToTelegram(context, fullId, metadata, fileExt, fileName, fileType, returnLink) {
const { env, waitUntil, uploadConfig, url, formdata } = context;
const db = getDatabase(env);
// 选择一个 Telegram 渠道上传,若负载均衡开启,则随机选择一个;否则选择第一个
const tgSettings = uploadConfig.telegram;
@@ -469,7 +470,6 @@ async function uploadFileToTelegram(context, fullId, metadata, fileExt, fileName
metadata.TgFileId = id;
metadata.TgChatId = tgChatId;
metadata.TgBotToken = tgBotToken;
const db = getDatabase(env);
await db.put(fullId, "", {
metadata: metadata,
});
@@ -491,6 +491,7 @@ async function uploadFileToTelegram(context, fullId, metadata, fileExt, fileName
// 外链渠道
async function uploadFileToExternal(context, fullId, metadata, returnLink) {
const { env, waitUntil, formdata } = context;
const db = getDatabase(env);
// 直接将外链写入metadata
metadata.Channel = "External";
@@ -503,7 +504,6 @@ async function uploadFileToExternal(context, fullId, metadata, returnLink) {
metadata.ExternalLink = extUrl;
// 写入KV数据库
try {
const db = getDatabase(env);
await db.put(fullId, "", {
metadata: metadata,
});

View File

@@ -1,6 +1,7 @@
import { fetchSecurityConfig } from "../utils/sysConfig";
import { purgeCFCache } from "../utils/purgeCache";
import { addFileToIndex } from "../utils/indexManager.js";
import { getDatabase } from '../utils/databaseAdapter.js';
// 统一的响应创建函数
export function createResponse(body, options = {}) {
@@ -206,9 +207,8 @@ export function getUploadIp(request) {
// 检查上传IP是否被封禁
export async function isBlockedUploadIp(env, uploadIp) {
try {
// 使用数据库适配器而不是直接访问KV
const { getDatabase } = await import('../utils/databaseAdapter.js');
const db = getDatabase(env);
let list = await db.get("manage@blockipList");
if (list == null) {
list = [];
@@ -227,19 +227,7 @@ export async function isBlockedUploadIp(env, uploadIp) {
// 构建唯一文件ID
export async function buildUniqueFileId(context, fileName, fileType = 'application/octet-stream') {
const { env, url } = context;
// 获取数据库适配器
const { getDatabase } = await import('../utils/databaseAdapter.js');
let db;
try {
db = getDatabase(env);
} catch (error) {
console.error('Database not configured for buildUniqueFileId:', error);
// 如果数据库未配置生成一个简单的唯一ID
const timestamp = Date.now();
const random = Math.random().toString(36).substring(2, 8);
return `${timestamp}_${random}_${fileName}`;
}
const db = getDatabase(env);
let fileExt = fileName.split('.').pop();
if (!fileExt || fileExt === fileName) {

View File

@@ -3,8 +3,10 @@
* 用于替代原有的KV存储操作
*/
function D1Database(db) {
this.db = db;
class D1Database {
constructor(db) {
this.db = db;
}
}
// ==================== 文件操作 ====================
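The adapter wrapper is now a proper ES class rather than a constructor function. Below is a minimal sketch (not the project's implementation) of how such a class can expose a KV-style read on top of D1; the `files` table and its `value`/`metadata` columns are assumptions inferred from queries elsewhere in this commit:

```js
// Sketch only: a KV-style read backed by D1's prepare/bind/first API.
class D1DatabaseSketch {
  constructor(db) {
    this.db = db; // bound D1 database (env.DB)
  }

  async getWithMetadata(id) {
    // Column names are assumptions
    const row = await this.db
      .prepare('SELECT value, metadata FROM files WHERE id = ?')
      .bind(id)
      .first();
    if (!row) return { value: null, metadata: null };
    return { value: row.value, metadata: JSON.parse(row.metadata || '{}') };
  }
}
```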

View File

@@ -14,11 +14,9 @@ export function createDatabaseAdapter(env) {
// 检查是否配置了D1数据库
if (env.DB && typeof env.DB.prepare === 'function') {
// 使用D1数据库
console.log('Using D1 Database');
return new D1Database(env.DB);
} else if (env.img_url && typeof env.img_url.get === 'function') {
// 回退到KV存储
console.log('Using KV Storage (fallback)');
return new KVAdapter(env.img_url);
} else {
console.error('No database configured. Please configure either D1 (env.DB) or KV (env.img_url)');
@@ -169,46 +167,3 @@ export function checkDatabaseConfig(env) {
configured: hasD1 || hasKV
};
}
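D1 is preferred whenever `env.DB` is bound, with `env.img_url` (KV) kept as the fallback, and `checkDatabaseConfig` reports which backend is available. A minimal usage sketch under that assumption (the handler name and the key read here are illustrative):

```js
// Sketch: resolving the adapter once per request, then using one interface
// for both backends.
import { getDatabase, checkDatabaseConfig } from './databaseAdapter.js';

export async function onRequestGet(context) {
  const { env } = context;
  if (!checkDatabaseConfig(env).configured) {
    return new Response('No database configured', { status: 500 });
  }
  const db = getDatabase(env); // D1 when env.DB is bound, KV (env.img_url) otherwise
  const meta = await db.get('manage@index@meta');
  return new Response(meta ?? 'index not built yet');
}
```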
/**
* 数据库健康检查
* @param {Object} env - 环境变量
* @returns {Promise<Object>} 健康检查结果
*/
export async function healthCheck(env) {
var config = checkDatabaseConfig(env);
if (!config.configured) {
return {
healthy: false,
error: 'No database configured',
config: config
};
}
try {
var db = getDatabase(env);
if (config.usingD1) {
// D1健康检查 - 尝试查询一个简单的表
var stmt = db.db.prepare('SELECT 1 as test');
await stmt.first();
} else {
// KV健康检查 - 尝试列出键
await db.list({ limit: 1 });
}
return {
healthy: true,
config: config
};
} catch (error) {
return {
healthy: false,
error: error.message,
config: config
};
}
}

View File

@@ -1,35 +1,49 @@
/* 索引管理器 - D1数据库版本 */
import { getDatabase } from './databaseAdapter.js';
/* 索引管理器 */
/**
* 文件索引结构(D1数据库存储):
*
* 文件表
* - 直接存储在 files 表中,包含所有文件信息和元数据
*
* 索引元数据表:
* - 存储在 index_metadata 表中
* - 包含 lastUpdated, totalCount, lastOperationId 等信息
*
* 原子操作表:
* - 存储在 index_operations 表中
* - 包含 id, type, timestamp, data, processed 等字段
* 文件索引结构(分块存储):
*
* 索引元数据
* - key: manage@index@meta
* - value: JSON.stringify(metadata)
* - metadata: {
* lastUpdated: 1640995200000,
* totalCount: 1000,
* lastOperationId: "operation_timestamp_uuid",
* chunkCount: 3,
* chunkSize: 10000
* }
*
* 索引分块:
* - key: manage@index_${chunkId} (例如: manage@index_0, manage@index_1, ...)
* - value: JSON.stringify(filesChunk)
* - filesChunk: [
* {
* id: "file_unique_id",
* metadata: {}
* },
* ...
* ]
*
* 原子操作结构(保持不变):
* - key: manage@index@operation_${timestamp}_${uuid}
* - value: JSON.stringify(operation)
* - operation: {
* type: "add" | "remove" | "move" | "batch_add" | "batch_remove" | "batch_move",
* timestamp: 1640995200000,
* data: {
* fileId: "file_unique_id",
* metadata: {}
* // 根据操作类型包含不同的数据
* }
* }
*/
import { getDatabase } from './databaseAdapter.js';
const INDEX_KEY = 'manage@index';
const INDEX_META_KEY = 'manage@index@meta'; // 索引元数据键
const OPERATION_KEY_PREFIX = 'manage@index@operation_';
const INDEX_CHUNK_SIZE = 10000; // 索引分块大小
const KV_LIST_LIMIT = 1000; // KV 列出批量大小
const KV_LIST_LIMIT = 1000; // 数据库列出批量大小
const BATCH_SIZE = 10; // 批量处理大小
/**
@@ -40,11 +54,11 @@ const BATCH_SIZE = 10; // 批量处理大小
*/
export async function addFileToIndex(context, fileId, metadata = null) {
const { env } = context;
const db = getDatabase(env);
try {
if (metadata === null) {
// 如果未传入metadata尝试从数据库中获取
const db = getDatabase(env);
const fileData = await db.getWithMetadata(fileId);
metadata = fileData.metadata || {};
}
@@ -75,6 +89,7 @@ export async function batchAddFilesToIndex(context, files, options = {}) {
try {
const { env } = context;
const { skipExisting = false } = options;
const db = getDatabase(env);
// 处理每个文件的metadata
const processedFiles = [];
@@ -82,10 +97,10 @@ export async function batchAddFilesToIndex(context, files, options = {}) {
const { fileId, metadata } = fileItem;
let finalMetadata = metadata;
// 如果没有提供metadata尝试从KV中获取
// 如果没有提供metadata尝试从数据库中获取
if (!finalMetadata) {
try {
const fileData = await getDatabase(env).getWithMetadata(fileId);
const fileData = await db.getWithMetadata(fileId);
finalMetadata = fileData.metadata || {};
} catch (error) {
console.warn(`Failed to get metadata for file ${fileId}:`, error);
@@ -180,13 +195,14 @@ export async function batchRemoveFilesFromIndex(context, fileIds) {
export async function moveFileInIndex(context, originalFileId, newFileId, newMetadata = null) {
try {
const { env } = context;
const db = getDatabase(env);
// 确定最终的metadata
let finalMetadata = newMetadata;
if (finalMetadata === null) {
// 如果没有提供新metadata尝试从KV中获取
// 如果没有提供新metadata尝试从数据库中获取
try {
const fileData = await getDatabase(env).getWithMetadata(newFileId);
const fileData = await db.getWithMetadata(newFileId);
finalMetadata = fileData.metadata || {};
} catch (error) {
console.warn(`Failed to get metadata for new file ${newFileId}:`, error);
@@ -218,6 +234,7 @@ export async function moveFileInIndex(context, originalFileId, newFileId, newMet
export async function batchMoveFilesInIndex(context, moveOperations) {
try {
const { env } = context;
const db = getDatabase(env);
// 处理每个移动操作的metadata
const processedOperations = [];
@@ -227,9 +244,9 @@ export async function batchMoveFilesInIndex(context, moveOperations) {
// 确定最终的metadata
let finalMetadata = metadata;
if (finalMetadata === null || finalMetadata === undefined) {
// 如果没有提供新metadata尝试从KV中获取
// 如果没有提供新metadata尝试从数据库中获取
try {
const fileData = await getDatabase(env).getWithMetadata(newFileId);
const fileData = await db.getWithMetadata(newFileId);
finalMetadata = fileData.metadata || {};
} catch (error) {
console.warn(`Failed to get metadata for new file ${newFileId}:`, error);
@@ -273,7 +290,7 @@ export async function batchMoveFilesInIndex(context, moveOperations) {
* @returns {Object} 合并结果
*/
export async function mergeOperationsToIndex(context, options = {}) {
const { waitUntil } = context;
const { request } = context;
const { cleanupAfterMerge = true } = options;
try {
@@ -290,8 +307,11 @@ export async function mergeOperationsToIndex(context, options = {}) {
}
// 获取所有待处理的操作
const operations = await getAllPendingOperations(context, currentIndex.lastOperationId);
const operationsResult = await getAllPendingOperations(context, currentIndex.lastOperationId);
const operations = operationsResult.operations;
const isALLOperations = operationsResult.isAll;
if (operations.length === 0) {
console.log('No pending operations to merge');
return {
@@ -301,7 +321,7 @@ export async function mergeOperationsToIndex(context, options = {}) {
};
}
console.log(`Found ${operations.length} pending operations to merge`);
console.log(`Found ${operations.length} pending operations to merge. Is all operations: ${isALLOperations}, if there are remaining operations they will be processed in the next merge.`);
// 按时间戳排序操作,确保按正确顺序应用
operations.sort((a, b) => a.timestamp - b.timestamp);
@@ -379,8 +399,8 @@ export async function mergeOperationsToIndex(context, options = {}) {
workingIndex.lastOperationId = processedOperationIds[processedOperationIds.length - 1];
}
// 保存更新后的索引元数据
const saveSuccess = await saveIndexMetadata(context, workingIndex);
// 保存更新后的索引(使用分块格式)
const saveSuccess = await saveChunkedIndex(context, workingIndex);
if (!saveSuccess) {
console.error('Failed to save chunked index');
return {
@@ -394,7 +414,23 @@ export async function mergeOperationsToIndex(context, options = {}) {
// 清理已处理的操作记录
if (cleanupAfterMerge && processedOperationIds.length > 0) {
waitUntil(cleanupOperations(context, processedOperationIds));
await cleanupOperations(context, processedOperationIds);
}
// 如果未处理完所有操作,调用 merge-operations API 递归处理
if (!isALLOperations) {
console.log('There are remaining operations, will process them in subsequent calls.');
const headers = new Headers(request.headers);
const originUrl = new URL(request.url);
const mergeUrl = `${originUrl.protocol}//${originUrl.host}/api/manage/list?action=merge-operations`;
await fetch(mergeUrl, { method: 'GET', headers });
return {
success: false,
error: 'There are remaining operations, will process them in subsequent calls.'
};
}
const result = {
@@ -447,46 +483,19 @@ export async function readIndex(context, options = {}) {
// 处理目录满足无头有尾的格式,根目录为空
const dirPrefix = directory === '' || directory.endsWith('/') ? directory : directory + '/';
// 直接从数据库读取文件,不依赖索引
const { env } = context;
const db = getDatabase(env);
let allFiles = [];
let cursor = null;
// 分批获取所有文件
while (true) {
const response = await db.listFiles({
limit: 1000,
cursor: cursor
});
if (!response || !response.keys || !Array.isArray(response.keys)) {
break;
}
for (const item of response.keys) {
// 跳过管理相关的键和分块数据
if (item.name.startsWith('manage@') || item.name.startsWith('chunk_')) {
continue;
}
// 跳过没有元数据的文件
if (!item.metadata || !item.metadata.TimeStamp) {
continue;
}
allFiles.push({
id: item.name,
metadata: item.metadata
});
}
cursor = response.cursor;
if (!cursor) break;
// 处理挂起的操作
const mergeResult = await mergeOperationsToIndex(context);
if (!mergeResult.success) {
throw new Error('Failed to merge operations: ' + mergeResult.error);
}
let filteredFiles = allFiles;
// 获取当前索引
const index = await getIndex(context);
if (!index.success) {
throw new Error('Failed to get index');
}
let filteredFiles = index.files;
// 目录过滤
if (directory) {
@@ -565,7 +574,7 @@ export async function readIndex(context, options = {}) {
files: resultFiles,
directories: Array.from(directories),
totalCount: totalCount,
indexLastUpdated: Date.now(),
indexLastUpdated: index.lastUpdated,
returnedCount: resultFiles.length,
success: true
};
@@ -584,12 +593,13 @@ export async function readIndex(context, options = {}) {
}
/**
* 重建索引(从 KV 中的所有文件重新构建索引)
* 重建索引(从数据库中的所有文件重新构建索引)
* @param {Object} context - 上下文对象
* @param {Function} progressCallback - 进度回调函数
*/
export async function rebuildIndex(context, progressCallback = null) {
const { env, waitUntil } = context;
const db = getDatabase(env);
try {
console.log('Starting index rebuild...');
@@ -603,24 +613,30 @@ export async function rebuildIndex(context, progressCallback = null) {
lastOperationId: null
};
// 从D1数据库读取所有文件
const db = getDatabase(env);
const filesStmt = db.db.prepare('SELECT id, metadata FROM files WHERE timestamp IS NOT NULL ORDER BY timestamp DESC');
const fileResults = await filesStmt.all();
// 分批读取所有文件
while (true) {
const response = await db.list({
limit: KV_LIST_LIMIT,
cursor: cursor
});
for (const row of fileResults) {
try {
const metadata = JSON.parse(row.metadata || '{}');
cursor = response.cursor;
// 跳过没有时间戳的文件
if (!metadata.TimeStamp) {
for (const item of response.keys) {
// 跳过管理相关的键
if (item.name.startsWith('manage@') || item.name.startsWith('chunk_')) {
continue;
}
// 跳过没有元数据的文件
if (!item.metadata || !item.metadata.TimeStamp) {
continue;
}
// 构建文件索引项
const fileItem = {
id: row.id,
metadata: metadata
id: item.name,
metadata: item.metadata || {}
};
newIndex.files.push(fileItem);
@@ -630,9 +646,12 @@ export async function rebuildIndex(context, progressCallback = null) {
if (progressCallback && processedCount % 100 === 0) {
progressCallback(processedCount);
}
} catch (error) {
console.warn(`Failed to parse metadata for file ${row.id}:`, error);
}
if (!cursor) break;
// 添加协作点
await new Promise(resolve => setTimeout(resolve, 10));
}
// 按时间戳倒序排序
@@ -640,8 +659,8 @@ export async function rebuildIndex(context, progressCallback = null) {
newIndex.totalCount = newIndex.files.length;
// 保存新索引元数据
const saveSuccess = await saveIndexMetadata(context, newIndex);
// 保存新索引(使用分块格式)
const saveSuccess = await saveChunkedIndex(context, newIndex);
if (!saveSuccess) {
console.error('Failed to save chunked index during rebuild');
return {
@@ -677,65 +696,23 @@ export async function rebuildIndex(context, progressCallback = null) {
*/
export async function getIndexInfo(context) {
try {
// 直接从数据库读取文件信息,不依赖索引
const { env } = context;
const db = getDatabase(env);
const index = await getIndex(context);
let allFiles = [];
let cursor = null;
// 分批获取所有文件
while (true) {
const response = await db.listFiles({
limit: 1000,
cursor: cursor
});
if (!response || !response.keys || !Array.isArray(response.keys)) {
break;
}
for (const item of response.keys) {
// 跳过管理相关的键和分块数据
if (item.name.startsWith('manage@') || item.name.startsWith('chunk_')) {
continue;
}
// 跳过没有元数据的文件
if (!item.metadata || !item.metadata.TimeStamp) {
continue;
}
allFiles.push({
id: item.name,
metadata: item.metadata
});
}
cursor = response.cursor;
if (!cursor) break;
}
// 如果没有文件,返回空结果
if (allFiles.length === 0) {
// 检查索引是否成功获取
if (index.success === false) {
return {
success: true,
totalFiles: 0,
lastUpdated: Date.now(),
channelStats: {},
directoryStats: {},
typeStats: {},
oldestFile: null,
newestFile: null
};
success: false,
error: 'Failed to retrieve index',
message: 'Index is not available or corrupted'
}
}
// 统计各渠道文件数量
const channelStats = {};
const directoryStats = {};
const typeStats = {};
allFiles.forEach(file => {
index.files.forEach(file => {
// 渠道统计
let channel = file.metadata.Channel || 'Telegraph';
if (channel === 'TelegramNew') {
@@ -756,31 +733,15 @@ export async function getIndexInfo(context) {
typeStats[listType] = (typeStats[listType] || 0) + 1;
});
// 找到最新和最旧的文件
let oldestFile = null;
let newestFile = null;
if (allFiles.length > 0) {
// 按时间戳排序
const sortedFiles = [...allFiles].sort((a, b) => {
const timeA = a.metadata.TimeStamp || 0;
const timeB = b.metadata.TimeStamp || 0;
return timeA - timeB;
});
oldestFile = sortedFiles[0];
newestFile = sortedFiles[sortedFiles.length - 1];
}
return {
success: true,
totalFiles: allFiles.length,
lastUpdated: Date.now(),
totalFiles: index.totalCount,
lastUpdated: index.lastUpdated,
channelStats,
directoryStats,
typeStats,
oldestFile,
newestFile
oldestFile: index.files[index.files.length - 1],
newestFile: index.files[0]
};
} catch (error) {
console.error('Error getting index info:', error);
@@ -807,6 +768,7 @@ function generateOperationId() {
*/
async function recordOperation(context, type, data) {
const { env } = context;
const db = getDatabase(env);
const operationId = generateOperationId();
const operation = {
@@ -814,9 +776,9 @@ async function recordOperation(context, type, data) {
timestamp: Date.now(),
data
};
const db = getDatabase(env);
await db.putIndexOperation(operationId, operation);
const operationKey = OPERATION_KEY_PREFIX + operationId;
await db.put(operationKey, JSON.stringify(operation));
return operationId;
}
@@ -828,29 +790,59 @@ async function recordOperation(context, type, data) {
*/
async function getAllPendingOperations(context, lastOperationId = null) {
const { env } = context;
const db = getDatabase(env);
const operations = [];
let cursor = null;
try {
const db = getDatabase(env);
const allOperations = await db.listIndexOperations({
processed: false,
limit: 10000 // 获取所有未处理的操作
});
// 如果指定了lastOperationId过滤已处理的操作
for (const operation of allOperations) {
if (lastOperationId && operation.id <= lastOperationId) {
continue;
let cursor = null;
const MAX_OPERATION_COUNT = 30; // 单次获取的最大操作数量
let isALL = true; // 是否获取了所有操作
let operationCount = 0;
try {
while (true) {
const response = await db.list({
prefix: OPERATION_KEY_PREFIX,
limit: KV_LIST_LIMIT,
cursor: cursor
});
for (const item of response.keys) {
// 如果指定了lastOperationId跳过已处理的操作
if (lastOperationId && item.name <= OPERATION_KEY_PREFIX + lastOperationId) {
continue;
}
if (operationCount >= MAX_OPERATION_COUNT) {
isALL = false; // 达到最大操作数量,停止获取
break;
}
try {
const operationData = await db.get(item.name);
if (operationData) {
const operation = JSON.parse(operationData);
operation.id = item.name.substring(OPERATION_KEY_PREFIX.length);
operations.push(operation);
operationCount++;
}
} catch (error) {
isALL = false;
console.warn(`Failed to parse operation ${item.name}:`, error);
}
}
operations.push(operation);
cursor = response.cursor;
if (!cursor || operationCount >= MAX_OPERATION_COUNT) break;
}
} catch (error) {
console.error('Error getting pending operations:', error);
}
return operations;
return {
operations,
isAll: isALL,
}
}
/**
@@ -1015,32 +1007,45 @@ function applyBatchMoveOperation(index, data) {
}
/**
* 清理已处理的操作记录
* 并发清理指定的原子操作记录
* @param {Object} context - 上下文对象
* @param {Array} operationIds - 要清理的操作ID数组
* @param {number} concurrency - 并发数量默认为10
*/
async function cleanupOperations(context, operationIds, concurrency = 10) {
const { env } = context;
const db = getDatabase(env);
try {
console.log(`Cleaning up ${operationIds.length} processed operations with concurrency ${concurrency}...`);
let deletedCount = 0;
let errorCount = 0;
// 创建删除任务数组
const deleteTasks = operationIds.map(operationId => {
const operationKey = OPERATION_KEY_PREFIX + operationId;
return async () => {
try {
await getDatabase(env).delete(operationKey);
await db.delete(operationKey);
deletedCount++;
} catch (error) {
console.error(`Error deleting operation ${operationId}:`, error);
errorCount++;
}
};
});
// 使用并发控制执行删除操作
await promiseLimit(deleteTasks, concurrency);
console.log(`Successfully cleaned up ${operationIds.length} operations`);
console.log(`Successfully cleaned up ${deletedCount} operations, ${errorCount} operations failed.`);
return {
success: true,
deletedCount: deletedCount,
errorCount: errorCount,
};
} catch (error) {
console.error('Error cleaning up operations:', error);
}
@@ -1052,7 +1057,8 @@ async function cleanupOperations(context, operationIds, concurrency = 10) {
* @returns {Object} 删除结果 { success, deletedCount, errors?, totalFound? }
*/
export async function deleteAllOperations(context) {
const { env } = context;
const { request, env } = context;
const db = getDatabase(env);
try {
console.log('Starting to delete all atomic operations...');
@@ -1064,18 +1070,12 @@ export async function deleteAllOperations(context) {
// 首先收集所有操作键
while (true) {
const response = await getDatabase(env).list({
const response = await db.list({
prefix: OPERATION_KEY_PREFIX,
limit: KV_LIST_LIMIT,
cursor: cursor
});
// 检查响应格式
if (!response || !response.keys || !Array.isArray(response.keys)) {
console.error('Invalid response from database list in cleanupProcessedOperations:', response);
break;
}
for (const item of response.keys) {
allOperationIds.push(item.name.substring(OPERATION_KEY_PREFIX.length));
totalFound++;
@@ -1096,11 +1096,31 @@ export async function deleteAllOperations(context) {
}
console.log(`Found ${totalFound} atomic operations to delete`);
// 批量删除原子操作
await cleanupOperations(context, allOperationIds);
console.log(`Delete all operations completed`);
// 限制单次删除的数量
const MAX_DELETE_BATCH = 40;
const toDeleteOperationIds = allOperationIds.slice(0, MAX_DELETE_BATCH);
// 批量删除原子操作
const cleanupResult = await cleanupOperations(context, toDeleteOperationIds);
// 剩余未删除的操作,调用 delete-operations API 进行递归删除
if (allOperationIds.length > MAX_DELETE_BATCH || cleanupResult.errorCount > 0) {
console.warn(`Too many operations (${allOperationIds.length}), only deleting first ${cleanupResult.deletedCount}. The remaining operations will be deleted in subsequent calls.`);
// 复制请求头,用于鉴权
const headers = new Headers(request.headers);
const originUrl = new URL(request.url);
const deleteUrl = `${originUrl.protocol}//${originUrl.host}/api/manage/list?action=delete-operations`
await fetch(deleteUrl, {
method: 'GET',
headers: headers
});
} else {
console.log(`Delete all operations completed`);
}
} catch (error) {
console.error('Error deleting all operations:', error);
@@ -1116,8 +1136,8 @@ export async function deleteAllOperations(context) {
async function getIndex(context) {
const { waitUntil } = context;
try {
// 首先尝试加载索引
const index = await loadIndexFromDatabase(context);
// 首先尝试加载分块索引
const index = await loadChunkedIndex(context);
if (index.success) {
return index;
} else {
@@ -1225,71 +1245,100 @@ async function promiseLimit(tasks, concurrency = BATCH_SIZE) {
}
/**
* 保存分块索引到KV存储
* 保存分块索引到数据库
* @param {Object} context - 上下文对象,包含 env
* @param {Object} index - 完整的索引对象
* @returns {Promise<boolean>} 是否保存成功
*/
async function saveIndexMetadata(context, index) {
async function saveChunkedIndex(context, index) {
const { env } = context;
const db = getDatabase(env);
try {
const db = getDatabase(env);
// 保存索引元数据到index_metadata表
const stmt = db.db.prepare(`
INSERT OR REPLACE INTO index_metadata (key, last_updated, total_count, last_operation_id)
VALUES (?, ?, ?, ?)
`);
await stmt.bind(
'main_index',
index.lastUpdated,
index.totalCount,
index.lastOperationId
).run();
console.log(`Saved index metadata: ${index.totalCount} total files, last updated: ${index.lastUpdated}`);
const files = index.files || [];
const chunks = [];
// 将文件数组分块
for (let i = 0; i < files.length; i += INDEX_CHUNK_SIZE) {
const chunk = files.slice(i, i + INDEX_CHUNK_SIZE);
chunks.push(chunk);
}
// 保存索引元数据
const metadata = {
lastUpdated: index.lastUpdated,
totalCount: index.totalCount,
lastOperationId: index.lastOperationId,
chunkCount: chunks.length,
chunkSize: INDEX_CHUNK_SIZE
};
await db.put(INDEX_META_KEY, JSON.stringify(metadata));
// 保存各个分块
const savePromises = chunks.map((chunk, chunkId) => {
const chunkKey = `${INDEX_KEY}_${chunkId}`;
return db.put(chunkKey, JSON.stringify(chunk));
});
await Promise.all(savePromises);
console.log(`Saved chunked index: ${chunks.length} chunks, ${files.length} total files`);
return true;
} catch (error) {
console.error('Error saving index metadata:', error);
console.error('Error saving chunked index:', error);
return false;
}
}
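With `INDEX_CHUNK_SIZE` fixed at 10000, the number of chunk records and their keys follow directly from the file count. A tiny worked sketch (the file count is illustrative):

```js
// Sketch: how the file count maps to chunk records, using the constants above.
const INDEX_CHUNK_SIZE = 10000;
const totalFiles = 23456; // illustrative count
const chunkCount = Math.ceil(totalFiles / INDEX_CHUNK_SIZE); // 3
const chunkKeys = Array.from({ length: chunkCount }, (_, i) => `manage@index_${i}`);
// => ["manage@index_0", "manage@index_1", "manage@index_2"], plus manage@index@meta
```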
/**
* 从D1数据库加载索引
* 从数据库加载分块索引
* @param {Object} context - 上下文对象,包含 env
* @returns {Promise<Object>} 完整的索引对象
*/
async function loadIndexFromDatabase(context) {
async function loadChunkedIndex(context) {
const { env } = context;
const db = getDatabase(env);
try {
const db = getDatabase(env);
// 首先获取元数据
const metadataStmt = db.db.prepare('SELECT * FROM index_metadata WHERE key = ?');
const metadata = await metadataStmt.bind('main_index').first();
if (!metadata) {
const metadataStr = await db.get(INDEX_META_KEY);
if (!metadataStr) {
throw new Error('Index metadata not found');
}
// 从files表直接查询所有文件
const filesStmt = db.db.prepare('SELECT id, metadata FROM files ORDER BY timestamp DESC');
const fileResults = await filesStmt.all();
const files = fileResults.map(row => ({
id: row.id,
metadata: JSON.parse(row.metadata || '{}')
}));
const metadata = JSON.parse(metadataStr);
const files = [];
// 并行加载所有分块
const loadPromises = [];
for (let chunkId = 0; chunkId < metadata.chunkCount; chunkId++) {
const chunkKey = `${INDEX_KEY}_${chunkId}`;
loadPromises.push(
db.get(chunkKey).then(chunkStr => {
if (chunkStr) {
return JSON.parse(chunkStr);
}
return [];
})
);
}
const chunks = await Promise.all(loadPromises);
// 合并所有分块
chunks.forEach(chunk => {
if (Array.isArray(chunk)) {
files.push(...chunk);
}
});
const index = {
files,
lastUpdated: metadata.last_updated,
totalCount: metadata.total_count,
lastOperationId: metadata.last_operation_id,
lastUpdated: metadata.lastUpdated,
totalCount: metadata.totalCount,
lastOperationId: metadata.lastOperationId,
success: true
};
@@ -1317,12 +1366,13 @@ async function loadIndexFromDatabase(context) {
*/
export async function clearChunkedIndex(context, onlyNonUsed = false) {
const { env } = context;
const db = getDatabase(env);
try {
console.log('Starting chunked index cleanup...');
// 获取元数据
const metadataStr = await getDatabase(env).get(INDEX_META_KEY);
const metadataStr = await db.get(INDEX_META_KEY);
let chunkCount = 0;
if (metadataStr) {
@@ -1331,7 +1381,7 @@ export async function clearChunkedIndex(context, onlyNonUsed = false) {
if (!onlyNonUsed) {
// 删除元数据
await getDatabase(env).delete(INDEX_META_KEY).catch(() => {});
await db.delete(INDEX_META_KEY).catch(() => {});
}
}
@@ -1339,18 +1389,12 @@ export async function clearChunkedIndex(context, onlyNonUsed = false) {
const recordedChunks = []; // 现有的索引分块键
let cursor = null;
while (true) {
const response = await getDatabase(env).list({
const response = await db.list({
prefix: INDEX_KEY,
limit: KV_LIST_LIMIT,
cursor: cursor
});
// 检查响应格式
if (!response || !response.keys || !Array.isArray(response.keys)) {
console.error('Invalid response from database list in getIndexStorageStats:', response);
break;
}
for (const item of response.keys) {
recordedChunks.push(item.name);
}
@@ -1375,13 +1419,13 @@ export async function clearChunkedIndex(context, onlyNonUsed = false) {
}
deletePromises.push(
getDatabase(env).delete(chunkKey).catch(() => {})
db.delete(chunkKey).catch(() => {})
);
}
if (recordedChunks.includes(INDEX_KEY)) {
deletePromises.push(
getDatabase(env).delete(INDEX_KEY).catch(() => {})
db.delete(INDEX_KEY).catch(() => {})
);
}
@@ -1403,10 +1447,11 @@ export async function clearChunkedIndex(context, onlyNonUsed = false) {
*/
export async function getIndexStorageStats(context) {
const { env } = context;
const db = getDatabase(env);
try {
// 获取元数据
const metadataStr = await getDatabase(env).get(INDEX_META_KEY);
const metadataStr = await db.get(INDEX_META_KEY);
if (!metadataStr) {
return {
success: false,
@@ -1422,7 +1467,7 @@ export async function getIndexStorageStats(context) {
for (let chunkId = 0; chunkId < metadata.chunkCount; chunkId++) {
const chunkKey = `${INDEX_KEY}_${chunkId}`;
chunkChecks.push(
getDatabase(env).get(chunkKey).then(data => ({
db.get(chunkKey).then(data => ({
chunkId,
exists: !!data,
size: data ? data.length : 0

View File

@@ -1,6 +1,7 @@
import sentryPlugin from "@cloudflare/pages-plugin-sentry";
import '@sentry/tracing';
import { fetchOthersConfig } from "./sysConfig";
import { checkDatabaseConfig as checkDbConfig } from './databaseAdapter.js';
let disableTelemetry = false;
@@ -111,12 +112,9 @@ async function fetchSampleRate(context) {
}
}
import { checkDatabaseConfig as checkDbConfig } from './databaseAdapter.js';
// 检查数据库是否配置,文件索引是否存在
async function checkDatabaseConfigMiddleware(context) {
// 检查数据库是否配置
export async function checkDatabaseConfig(context) {
var env = context.env;
var waitUntil = context.waitUntil;
var dbConfig = checkDbConfig(env);
@@ -138,8 +136,4 @@ async function checkDatabaseConfigMiddleware(context) {
// 继续执行
return await context.next();
}
// 保持向后兼容性的别名
export const checkKVConfig = checkDatabaseConfigMiddleware;
export const checkDatabaseConfig = checkDatabaseConfigMiddleware;
}

View File

@@ -35,8 +35,7 @@ export async function validateApiToken(request, db, requiredPermission) {
return { valid: false, error: '无效的Token' };
}
// 检查权限
// 如果不需要特定权限requiredPermission为null则只要token有效就通过
// 检查权限如果不需要特定权限requiredPermission为null则只要token有效就通过
if (requiredPermission !== null && !permissions.includes(requiredPermission)) {
return { valid: false, error: `缺少${requiredPermission}权限` };
}
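A hedged sketch of how a manage API handler might gate on a token permission. The `{ valid, error }` return shape and the `(request, db, requiredPermission)` signature come from `validateApiToken` above; the import path and the `'list'` permission name are assumptions:

```js
// Illustrative handler only; adjust paths and permission names to the real project.
import { getDatabase } from '../utils/databaseAdapter.js';
import { validateApiToken } from '../utils/auth.js'; // path is an assumption

export async function onRequestGet(context) {
  const db = getDatabase(context.env);
  const auth = await validateApiToken(context.request, db, 'list');
  if (!auth.valid) {
    return new Response(auth.error, { status: 401 });
  }
  return new Response('ok');
}
```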

View File

@@ -1,4 +0,0 @@
[[d1_databases]]
binding = "DB"
database_name = "imgbed-database"
database_id = "your-database-id"