diff --git a/.cursorignore b/.cursorignore
index 6f9f00f..56662bb 100644
--- a/.cursorignore
+++ b/.cursorignore
@@ -1 +1,2 @@
 # Add directories or file patterns to ignore during indexing (e.g. foo/ or *.csv)
+# *.env
\ No newline at end of file
diff --git a/.gitignore b/.gitignore
index edd0df0..8a7451d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -37,4 +37,5 @@ npm-debug.log*
 packages/db/generated
-volumes
\ No newline at end of file
+volumes
+uploads
\ No newline at end of file
diff --git a/README.md b/README.md
index 7fd254f..047a08a 100644
--- a/README.md
+++ b/README.md
@@ -4,22 +4,164 @@
 This template is for creating a monorepo with Turborepo, shadcn/ui, tailwindcss v4, and react v19.
 
+## Project Structure
+
+```
+├── apps/
+│   ├── backend/   # Hono backend app
+│   └── web/       # Next.js frontend app
+├── packages/
+│   ├── db/        # Prisma database package
+│   ├── storage/   # Storage solution package
+│   ├── tus/       # TUS upload protocol package
+│   └── ui/        # UI component package
+└── docs/          # Documentation
+```
+
+## Features
+
+- 🚀 **Modern stack**: Next.js 15, React 19, Hono, Prisma
+- 📦 **Monorepo**: multi-package workspace managed with Turborepo
+- 🎨 **UI components**: shadcn/ui + TailwindCSS v4
+- 📤 **File uploads**: resumable uploads via the TUS protocol
+- 💾 **Multiple storage backends**: local storage + S3-compatible storage
+- 🗄️ **Database**: PostgreSQL + Prisma ORM
+- 🔄 **Real-time communication**: WebSocket support
+
+## Quick Start
+
+### 1. Install dependencies
+
+```bash
+pnpm install
+```
+
+### 2. Configure environment variables
+
+Copy the environment template and fill in your values:
+
+```bash
+cp .env.example .env
+```
+
+#### Storage configuration
+
+**Local storage (recommended for development):**
+
+```bash
+STORAGE_TYPE=local
+UPLOAD_DIR=./uploads
+```
+
+**S3 storage (recommended for production):**
+
+```bash
+STORAGE_TYPE=s3
+S3_BUCKET=your-bucket-name
+S3_REGION=us-east-1
+S3_ACCESS_KEY_ID=your-access-key
+S3_SECRET_ACCESS_KEY=your-secret-key
+```
+
+**MinIO for local development:**
+
+```bash
+STORAGE_TYPE=s3
+S3_BUCKET=uploads
+S3_ENDPOINT=http://localhost:9000
+S3_ACCESS_KEY_ID=minioadmin
+S3_SECRET_ACCESS_KEY=minioadmin
+S3_FORCE_PATH_STYLE=true
+```
+
+For the full reference, see the [Environment Variable Guide](./docs/ENVIRONMENT.md).
+
+### 3. Set up the database
+
+```bash
+# Generate the Prisma client
+pnpm db:generate
+
+# Run database migrations
+pnpm db:migrate
+
+# Seed the database (optional)
+pnpm db:seed
+```
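As a quick sanity check after step 3, the generated client can be queried directly from a script. This is a minimal sketch, assuming the `prisma` export and the `resource` model that the storage code later in this diff relies on; adjust the field names if your schema differs:

```typescript
import { prisma } from '@repo/db';

// List the ten most recent upload records and their status.
// (The `resource` model and `status` field are the ones the storage
// routes in this diff query; this is illustrative, not canonical.)
async function main() {
  const recent = await prisma.resource.findMany({
    orderBy: { createdAt: 'desc' },
    take: 10,
  });
  for (const r of recent) {
    console.log(r.id, r.status, r.createdAt);
  }
}

main().finally(() => prisma.$disconnect());
```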
+### 4. Start the development servers
+
+```bash
+pnpm dev
+```
+
+This starts:
+
+- Web app: http://localhost:3001
+- Backend API: http://localhost:3000
+- File uploads: http://localhost:3000/upload
+- Storage management API: http://localhost:3000/api/storage
+
+## The Storage Package (@repo/storage)
+
+The project includes a full-featured storage package that supports:
+
+### Core features
+
+- 🗂️ **Multiple storage backends**: local filesystem, AWS S3, MinIO, Alibaba Cloud OSS, Tencent Cloud COS
+- 📤 **TUS uploads**: resumable uploads for large files
+- 🔧 **Hono integration**: plug-and-play middleware
+- 📊 **File management**: full file lifecycle management
+- ⏰ **Automatic cleanup**: expired files are removed on a schedule
+- 🔄 **Storage migration**: move data between storage backends
+
+### API endpoints
+
+```bash
+# File resource management
+GET /api/storage/resources          # List all resources
+GET /api/storage/resource/:fileId   # Get file info
+DELETE /api/storage/resource/:id    # Delete a resource
+
+# File access and download
+GET /download/:fileId               # File download/access (works with every storage backend)
+
+# Stats and administration
+GET /api/storage/stats              # Get statistics
+POST /api/storage/cleanup           # Clean up expired files
+POST /api/storage/migrate-storage   # Migrate between storage backends
+
+# File upload (TUS protocol)
+POST /upload                        # Start an upload
+PATCH /upload/:id                   # Resume an upload
+HEAD /upload/:id                    # Get upload status
+```
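For orientation, this is roughly what a raw TUS 1.0 exchange against the upload endpoints above looks like. A hedged sketch using plain `fetch`: the header names come from the TUS specification, while the URL and port are the defaults from this README:

```typescript
// Minimal TUS creation + upload flow (single PATCH, no chunking).
async function rawTusUpload(file: Blob, filename: string): Promise<string> {
  // 1. Create the upload; the server answers with a Location header.
  const create = await fetch('http://localhost:3000/upload', {
    method: 'POST',
    headers: {
      'Tus-Resumable': '1.0.0',
      'Upload-Length': String(file.size),
      'Upload-Metadata': `filename ${btoa(filename)}`,
    },
  });
  const location = create.headers.get('Location');
  if (!location) throw new Error('No upload URL returned');

  // 2. Send the bytes; Upload-Offset tells the server where this chunk starts.
  await fetch(location, {
    method: 'PATCH',
    headers: {
      'Tus-Resumable': '1.0.0',
      'Upload-Offset': '0',
      'Content-Type': 'application/offset+octet-stream',
    },
    body: file,
  });
  return location;
}
```

In practice the frontend uses `tus-js-client` (see `useTusUpload` later in this diff), which layers retries, chunking, and resumption on top of this flow.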
+### Usage example
+
+```typescript
+import { createStorageApp, startCleanupScheduler } from '@repo/storage';
+
+// Create the storage app
+const storageApp = createStorageApp({
+  apiBasePath: '/api/storage',
+  uploadPath: '/upload',
+});
+
+// Mount it on the main app
+app.route('/', storageApp);
+
+// Start the cleanup scheduler
+startCleanupScheduler();
+```
+
 ## One-click Deploy
 
 You can deploy this template to Vercel with the button below:
 
-[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?build-command=cd+..%2F..%2F+%26%26+pnpm+turbo+build+--filter%3Dweb...&demo-description=This+is+a+template+Turborepo+with+ShadcnUI+tailwindv4&demo-image=%2F%2Fimages.ctfassets.net%2Fe5382hct74si%2F2JxNyYATuuV7WPuJ31kF9Q%2F433990aa4c8e7524a9095682fb08f0b1%2FBasic.png&demo-title=Turborepo+%26+Next.js+Starter&demo-url=https%3A%2F%2Fexamples-basic-web.vercel.sh%2F&from=templates&project-name=Turborepo+%26+Next.js+Starter&repository-name=turborepo-shadcn-tailwind&repository-url=https%3A%2F%2Fgithub.com%2Flinkb15%2Fturborepo-shadcn-ui-tailwind-4&root-directory=apps%2Fweb&skippable-integrations=1)
+[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?build-command=cd+..%2F..%2F+%26%26+pnpm+turbo+build+--filter%3Dweb...&demo-description=This+is+a+template+Turborepo+with+ShadcnUI+tailwindv4&demo-image=%2F%2Fimages.ctfassets.net%2Fe5382hct74si%2F2JxNyYATuuV7WPuJ31kF9Q%2F433990aa4c8e7524a9095682fb08f0b1%2FBasic.png&demo-title=Turborepo+%26+Next.js+Starter&demo-url=https%3A%2F%2Fexamples-basic-web.vercel.sh%2F&from=templates&project-name=Turborepo+%26+Next.js+Starter&repository-name=turborepo-shadcn-tailwind&repository-url=https%3A%2F%2Fgithub.com%2Flinkb15%2Fturborepo-shadcn-ui-tailwind-4&root-directory=apps%2Fweb&skippable-integrations=1)
 
-## Usage
-
-in the root directory run:
-
-```bash
-pnpm install
-pnpm dev
-```
-
-## Adding components
+## Adding UI Components
 
 To add components to your app, run the following command at the root of your `web` app:
@@ -33,7 +175,7 @@
 This will place the ui components in the `packages/ui/src/components` directory.
 
 Your `globals.css` are already set up to use the components from the `ui` package which is imported in the `web` app.
 
-## Using components
+## Using Components
 
 To use the components in your app, import them from the `ui` package.
@@ -41,11 +183,44 @@
 ```tsx
 import { Button } from '@repo/ui/components/ui/button';
 ```
 
+## Scripts
+
+```bash
+# Development
+pnpm dev          # Start all apps
+pnpm dev:web      # Start only the web app
+pnpm dev:backend  # Start only the backend
+
+# Build
+pnpm build          # Build all packages
+pnpm build:web      # Build the web app
+pnpm build:backend  # Build the backend
+
+# Database
+pnpm db:generate  # Generate the Prisma client
+pnpm db:migrate   # Run database migrations
+pnpm db:seed      # Seed the database
+pnpm db:studio    # Open Prisma Studio
+
+# Code quality
+pnpm lint        # Lint
+pnpm type-check  # Type-check
+pnpm format      # Format the code
+```
+
+## Documentation
+
+- [Environment Variable Guide](./docs/ENVIRONMENT.md)
+- [Storage package docs](./packages/storage/README.md)
+- [File Access Guide](./docs/STATIC_FILES.md)
+
 ## More Resources
 
 - [shadcn/ui - Monorepo](https://ui.shadcn.com/docs/monorepo)
 - [Turborepo - shadcn/ui](https://turbo.build/repo/docs/guides/tools/shadcn-ui)
 - [TailwindCSS v4 - Explicitly Registering Sources](https://tailwindcss.com/docs/detecting-classes-in-source-files#explicitly-registering-sources)
+- [Hono Documentation](https://hono.dev/)
+- [TUS Protocol](https://tus.io/)
 
 [opengraph-image]: https://turborepo-shadcn-tailwind.vercel.app/opengraph-image.png
 [opengraph-image-url]: https://turborepo-shadcn-tailwind.vercel.app/
diff --git a/apps/backend/package.json b/apps/backend/package.json
index 4c7c7d1..05b2536 100644
--- a/apps/backend/package.json
+++ b/apps/backend/package.json
@@ -1,35 +1,36 @@
 {
-  "name": "backend",
-  "scripts": {
-    "dev": "bun run --hot src/index.ts"
-  },
-  "dependencies": {
-    "@elastic/elasticsearch": "^9.0.2",
-    "@hono/node-server": "^1.14.3",
-    "@hono/trpc-server": "^0.3.4",
-    "@hono/zod-validator": "^0.5.0",
-    "@repo/db": "workspace:*",
-    "@repo/oidc-provider": "workspace:*",
-    "@repo/tus": "workspace:*",
-    "@trpc/server": "11.1.2",
-    "dayjs": "^1.11.12",
-    "hono": "^4.7.10",
-    "ioredis": "5.4.1",
-    "jose": "^6.0.11",
-    "minio": "7.1.3",
-    "nanoid": "^5.1.5",
-    "node-cron": "^4.0.7",
-    "oidc-provider": "^9.1.1",
-    "superjson": "^2.2.2",
-    "transliteration": "^2.3.5",
-    "valibot": "^1.1.0",
-    "zod": "^3.25.23"
-  },
-  "devDependencies": {
-    "@types/bun": "latest",
-    "@types/node": "^22.15.21",
-    "@types/oidc-provider": "^9.1.0",
-    "supertest": "^7.1.1",
-    "vitest": "^3.1.4"
-  }
+  "name": "backend",
+  "scripts": {
+    "dev": "bun run --hot src/index.ts"
+  },
+  "dependencies": {
+    "@elastic/elasticsearch": "^9.0.2",
+    "@hono/node-server": "^1.14.3",
+    "@hono/trpc-server": "^0.3.4",
+    "@hono/zod-validator": "^0.5.0",
+    "@repo/db": "workspace:*",
+    "@repo/oidc-provider": "workspace:*",
+    "@repo/tus": "workspace:*",
+    "@repo/storage": "workspace:*",
+    "@trpc/server": "11.1.2",
+    "dayjs": "^1.11.12",
+    "hono": "^4.7.10",
+    "ioredis": "5.4.1",
+    "jose": "^6.0.11",
+    "minio": "7.1.3",
+    "nanoid": "^5.1.5",
+    "node-cron": "^4.0.7",
+    "oidc-provider": "^9.1.1",
+    "superjson": "^2.2.2",
+    "transliteration": "^2.3.5",
+    "valibot": "^1.1.0",
+    "zod": "^3.25.23"
+  },
+  "devDependencies": {
+    "@types/bun": "latest",
+    "@types/node": "^22.15.21",
+    "@types/oidc-provider": "^9.1.0",
+    "supertest": "^7.1.1",
+    "vitest": "^3.1.4"
+  }
 }
diff --git a/apps/backend/src/index.ts b/apps/backend/src/index.ts
index f5e82bf..4162eeb 100644
--- a/apps/backend/src/index.ts
+++ b/apps/backend/src/index.ts
@@ -15,11 +15,8 @@
 import { wsHandler, wsConfig } from './socket';
 
 // Import the new routes
 import userRest from './user/user.rest';
-import uploadRest from './upload/upload.rest';
-import { startCleanupScheduler } from './upload/scheduler';
-
-// Import the OIDC Provider
-import { oidcApp } from './oidc';
+// Use the new @repo/storage package
+import { createStorageApp, startCleanupScheduler } from '@repo/storage';
 
 type Env = {
   Variables: {
@@ -59,10 +56,13 @@
app.use(
 
 // Add the REST API routes
 app.route('/api/users', userRest);
-app.route('/api/upload', uploadRest);
 
-// Mount the OIDC Provider
-app.route('/oidc', oidcApp);
+// Use the new storage app, which bundles the management API and upload handling
+const storageApp = createStorageApp({
+  apiBasePath: '/api/storage',
+  uploadPath: '/upload',
+});
+app.route('/', storageApp);
 
 // Add the WebSocket route
 app.get('/ws', wsHandler);
diff --git a/apps/backend/src/upload/types.ts b/apps/backend/src/upload/types.ts
deleted file mode 100644
index 2140ebc..0000000
--- a/apps/backend/src/upload/types.ts
+++ /dev/null
@@ -1,29 +0,0 @@
-export interface UploadCompleteEvent {
-  identifier: string;
-  filename: string;
-  size: number;
-  hash: string;
-  integrityVerified: boolean;
-}
-
-export type UploadEvent = {
-  uploadStart: {
-    identifier: string;
-    filename: string;
-    totalSize: number;
-    resuming?: boolean;
-  };
-  uploadComplete: UploadCompleteEvent;
-  uploadError: { identifier: string; error: string; filename: string };
-};
-export interface UploadLock {
-  clientId: string;
-  timestamp: number;
-}
-// Add a retry mechanism for transient network problems
-// Periodically clean up expired temporary files
-// Add file integrity verification
-// Persist upload progress so it survives a service restart
-// Add a concurrency limit so system resources are not exhausted
-// Deduplicate files to avoid repeated uploads
-// Add logging and monitoring
diff --git a/apps/backend/src/upload/upload.rest.ts b/apps/backend/src/upload/upload.rest.ts
deleted file mode 100644
index 440e457..0000000
--- a/apps/backend/src/upload/upload.rest.ts
+++ /dev/null
@@ -1,198 +0,0 @@
-import { Hono } from 'hono';
-import { handleTusRequest, cleanupExpiredUploads, getStorageInfo } from './tus';
-import {
-  getResourceByFileId,
-  getAllResources,
-  deleteResource,
-  updateResource,
-  getResourcesByStorageType,
-  getResourcesByStatus,
-  getUploadingResources,
-  getResourceStats,
-  migrateResourcesStorageType,
-} from './upload.index';
-import { StorageManager, StorageType, type StorageConfig } from './storage.adapter';
-import { prisma } from '@repo/db';
-
-const uploadRest = new Hono();
-
-// Get file resource info
-uploadRest.get('/resource/:fileId', async (c) => {
-  const fileId = c.req.param('fileId');
-  const result = await getResourceByFileId(fileId);
-  return c.json(result);
-});
-
-// Get all resources
-uploadRest.get('/resources', async (c) => {
-  const resources = await getAllResources();
-  return c.json(resources);
-});
-
-// Get resources by storage type
-uploadRest.get('/resources/storage/:storageType', async (c) => {
-  const storageType = c.req.param('storageType') as StorageType;
-  const resources = await getResourcesByStorageType(storageType);
-  return c.json(resources);
-});
-
-// Get resources by status
-uploadRest.get('/resources/status/:status', async (c) => {
-  const status = c.req.param('status');
-  const resources = await getResourcesByStatus(status);
-  return c.json(resources);
-});
-
-// Get resources that are still uploading
-uploadRest.get('/resources/uploading', async (c) => {
-  const resources = await getUploadingResources();
-  return c.json(resources);
-});
-
-// Get resource statistics
-uploadRest.get('/stats', async (c) => {
-  const stats = await getResourceStats();
-  return c.json(stats);
-});
-
-// Delete a resource
-uploadRest.delete('/resource/:id', async (c) => {
-  const id = c.req.param('id');
-  const result = await deleteResource(id);
-  return c.json(result);
-});
-
-// Update a resource
-uploadRest.patch('/resource/:id', async (c) => {
-  const id = c.req.param('id');
-  const data = await c.req.json();
-  const result = await updateResource(id, data);
-  return c.json(result);
-});
-
-// Migrate resource storage type (bulk-update the storage-type flag in the database)
-uploadRest.post('/migrate-storage', async (c) => {
-  try {
-    const { from, to } = await c.req.json();
-    const result = await migrateResourcesStorageType(from as StorageType, to as StorageType);
-    return c.json({
-      success: true,
-      message: `Migrated ${result.count} resources from ${from} to ${to}`,
-      count: result.count,
-    });
-  } catch (error) {
-    console.error('Failed to migrate storage type:', error);
-    return c.json(
-      {
-        success: false,
-        error: error instanceof Error ? error.message : 'Unknown error',
-      },
-      400,
-    );
-  }
-});
-
-// Clean up expired uploads
-uploadRest.post('/cleanup', async (c) => {
-  const result = await cleanupExpiredUploads();
-  return c.json(result);
-});
-
-// Manually clean up resources with a given status
-uploadRest.post('/cleanup/by-status', async (c) => {
-  try {
-    const { status, olderThanDays } = await c.req.json();
-    const cutoffDate = new Date();
-    cutoffDate.setDate(cutoffDate.getDate() - (olderThanDays || 30));
-
-    const deletedResources = await prisma.resource.deleteMany({
-      where: {
-        status,
-        createdAt: {
-          lt: cutoffDate,
-        },
-      },
-    });
-
-    return c.json({
-      success: true,
-      message: `Deleted ${deletedResources.count} resources with status ${status}`,
-      count: deletedResources.count,
-    });
-  } catch (error) {
-    console.error('Failed to cleanup by status:', error);
-    return c.json(
-      {
-        success: false,
-        error: error instanceof Error ? error.message : 'Unknown error',
-      },
-      400,
-    );
-  }
-});
-
-// Get storage info
-uploadRest.get('/storage/info', async (c) => {
-  const storageInfo = getStorageInfo();
-  return c.json(storageInfo);
-});
-
-// Switch the storage type (requires an app restart)
-uploadRest.post('/storage/switch', async (c) => {
-  try {
-    const newConfig = (await c.req.json()) as StorageConfig;
-    const storageManager = StorageManager.getInstance();
-    await storageManager.switchStorage(newConfig);
-
-    return c.json({
-      success: true,
-      message: 'Storage configuration updated. Please restart the application for changes to take effect.',
-      newType: newConfig.type,
-    });
-  } catch (error) {
-    console.error('Failed to switch storage:', error);
-    return c.json(
-      {
-        success: false,
-        error: error instanceof Error ? error.message : 'Unknown error',
-      },
-      400,
-    );
-  }
-});
-
-// Validate a storage configuration
-uploadRest.post('/storage/validate', async (c) => {
-  try {
-    const config = (await c.req.json()) as StorageConfig;
-    const { validateStorageConfig } = await import('./storage.adapter');
-    const errors = validateStorageConfig(config);
-
-    if (errors.length > 0) {
-      return c.json({ valid: false, errors }, 400);
-    }
-
-    return c.json({ valid: true, message: 'Storage configuration is valid' });
-  } catch (error) {
-    return c.json(
-      {
-        valid: false,
-        errors: [error instanceof Error ? error.message : 'Invalid JSON'],
-      },
-      400,
-    );
-  }
-});
-
-// TUS protocol handling - generic catch-all handler
-uploadRest.all('/*', async (c) => {
-  try {
-    await handleTusRequest(c.req.raw, c.res);
-    return new Response(null);
-  } catch (error) {
-    console.error('TUS request error:', error);
-    return c.json({ error: 'Upload request failed' }, 500);
-  }
-});
-
-export default uploadRest;
diff --git a/apps/backend/src/upload/utils.ts b/apps/backend/src/upload/utils.ts
deleted file mode 100644
index a7c189f..0000000
--- a/apps/backend/src/upload/utils.ts
+++ /dev/null
@@ -1,4 +0,0 @@
-export function extractFileIdFromNginxUrl(url: string) {
-  const match = url.match(/uploads\/(\d{4}\/\d{2}\/\d{2}\/[^/]+)/);
-  return match ?
match[1] : ''; -} diff --git a/apps/backend/tsconfig.json b/apps/backend/tsconfig.json index 53412b3..ed8e081 100644 --- a/apps/backend/tsconfig.json +++ b/apps/backend/tsconfig.json @@ -4,7 +4,8 @@ "moduleResolution": "bundler", "paths": { "@/*": ["./*"], - "@repo/db/*": ["../../packages/db/src/*"] + "@repo/db/*": ["../../packages/db/src/*"], + "@repo/storage/*": ["../../packages/storage/src/*"] } } } diff --git a/apps/web/app/upload/page.tsx b/apps/web/app/upload/page.tsx new file mode 100644 index 0000000..cef182f --- /dev/null +++ b/apps/web/app/upload/page.tsx @@ -0,0 +1,44 @@ +'use client'; +import { FileUpload } from '../../components/FileUpload'; +import { FileDownload } from '../../components/FileDownload'; +import { AdvancedFileDownload } from '../../components/AdvancedFileDownload'; +import { DownloadTester } from '../../components/DownloadTester'; + +export default function UploadPage() { + return ( +
+
+
+

File Upload & Download Center

+

A complete file management solution: upload, download, and preview

+
+ + {/* Upload component */} +
+

📤 File Upload

+ +
+ + {/* Download tester component */} +
+

🔧 Download Tester

+ +
+ +
+ {/* Basic download component */} +
+

📥 Basic Download

+ +
+ + {/* Advanced download component */} +
+

🚀 Advanced Download

+ +
+
+
+
+  );
+}
diff --git a/apps/web/components/AdvancedFileDownload.tsx b/apps/web/components/AdvancedFileDownload.tsx
new file mode 100644
index 0000000..34ceeac
--- /dev/null
+++ b/apps/web/components/AdvancedFileDownload.tsx
@@ -0,0 +1,234 @@
+import React, { useState } from 'react';
+import { useFileDownload } from '../hooks/useFileDownload';
+import { useTusUpload } from '../hooks/useTusUpload';
+
+export function AdvancedFileDownload() {
+  const { getFileInfo } = useTusUpload();
+  const {
+    downloadProgress,
+    isDownloading,
+    downloadError,
+    downloadFile,
+    downloadFileWithProgress,
+    previewFile,
+    copyFileLink,
+    canPreview,
+    getFileIcon,
+  } = useFileDownload();
+
+  const [fileId, setFileId] = useState('');
+  const [fileInfo, setFileInfo] = useState(null);
+  const [loading, setLoading] = useState(false);
+  const [error, setError] = useState(null);
+
+  // Fetch file info
+  const handleGetFileInfo = async () => {
+    if (!fileId.trim()) {
+      setError('Please enter a file ID');
+      return;
+    }
+
+    setLoading(true);
+    setError(null);
+
+    try {
+      const info = await getFileInfo(fileId);
+      if (info) {
+        setFileInfo(info);
+      } else {
+        setError('File does not exist or is not ready yet');
+      }
+    } catch (err) {
+      setError('Failed to fetch file info');
+    } finally {
+      setLoading(false);
+    }
+  };
+
+  // Plain download
+  const handleSimpleDownload = () => {
+    downloadFile(fileId, fileInfo?.title);
+  };
+
+  // Download with progress
+  const handleProgressDownload = async () => {
+    try {
+      await downloadFileWithProgress(fileId, fileInfo?.title);
+    } catch (error) {
+      console.error('Download with progress failed:', error);
+    }
+  };
+
+  // Preview the file
+  const handlePreview = () => {
+    previewFile(fileId);
+  };
+
+  // Copy the link
+  const handleCopyLink = async () => {
+    try {
+      await copyFileLink(fileId);
+      alert('Link copied to clipboard!');
+    } catch (error) {
+      alert('Copy failed');
+    }
+  };
+
+  return (
+

Advanced File Download

+ + {/* File ID input */} +
+ +
+ setFileId(e.target.value)} + placeholder="Enter file ID" + className="flex-1 px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500" + /> + +
+
+ + {/* Error message */} + {(error || downloadError) && ( +
+ {error || downloadError} +
+ )} + + {/* Download progress */} + {isDownloading && downloadProgress && ( +
+
+ Download progress + {downloadProgress.percentage}% +
+
+
+
+
+ {formatFileSize(downloadProgress.loaded)} / {formatFileSize(downloadProgress.total)} +
+
+ )} + + {/* File info */} + {fileInfo && ( +
+
+ {getFileIcon(fileInfo.type || '')} +
+

{fileInfo.title || 'Unknown file'}

+

{fileInfo.type || 'Unknown type'}

+
+
+ +
+
+ Status: + + {fileInfo.status || 'Unknown'} + +
+ {fileInfo.meta?.size && ( +
+ Size: {formatFileSize(fileInfo.meta.size)} +
+ )} +
+ Created: {new Date(fileInfo.createdAt).toLocaleString()} +
+
+ Storage type: {fileInfo.storageType || 'Unknown'} +
+
+
+ )} + + {/* Action buttons */} + {fileInfo && ( +
+
+ + + {canPreview(fileInfo.type || '') && ( + + )} + +
+ + {/* Preview hint */} + {canPreview(fileInfo.type || '') && ( +
+

💡 This file supports online preview; click "Preview file" to view it right in the browser

+
+ )} +
+ )} + + {/* Usage notes */} +
+

What each action does:

+
+ • Quick download: downloads the file directly through the browser
+ • Progress download: shows download progress, useful for large files
+ • Preview file: images, PDFs, videos, and similar types open in the browser
+ • Copy link: copies the file access link to the clipboard
+
+
+  );
+}
+
+// Format a file size for display
+function formatFileSize(bytes: number): string {
+  if (bytes === 0) return '0 Bytes';
+  const k = 1024;
+  const sizes = ['Bytes', 'KB', 'MB', 'GB', 'TB'];
+  const i = Math.floor(Math.log(bytes) / Math.log(k));
+  return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + ' ' + sizes[i];
+}
diff --git a/apps/web/components/DownloadTester.tsx b/apps/web/components/DownloadTester.tsx
new file mode 100644
index 0000000..aa472b2
--- /dev/null
+++ b/apps/web/components/DownloadTester.tsx
@@ -0,0 +1,90 @@
+import React, { useState } from 'react';
+import { useTusUpload } from '../hooks/useTusUpload';
+
+export function DownloadTester() {
+  const { serverUrl, getFileInfo } = useTusUpload();
+  const [fileId, setFileId] = useState('2025/05/28/1mVGC8r6jy');
+  const [testResults, setTestResults] = useState(null);
+  const [loading, setLoading] = useState(false);
+
+  const runTests = async () => {
+    setLoading(true);
+    const results: any = {
+      fileId,
+      serverUrl,
+      timestamp: new Date().toISOString(),
+    };
+
+    try {
+      // Test 1: check the resource info
+      console.log('Testing resource info...');
+      const resourceInfo = await getFileInfo(fileId);
+      results.resourceInfo = resourceInfo;
+
+      // Test 2: probe the download endpoint
+      console.log('Testing download endpoint...');
+      const downloadUrl = `${serverUrl}/download/${fileId}`;
+      results.downloadUrl = downloadUrl;
+
+      const response = await fetch(downloadUrl, { method: 'HEAD' });
+      results.downloadResponse = {
+        status: response.status,
+        statusText: response.statusText,
+        headers: Object.fromEntries(response.headers.entries()),
+      };
+
+      // Test 3: probe the API endpoint
+      console.log('Testing API endpoint...');
+      const apiUrl = `${serverUrl}/api/storage/resource/${fileId}`;
+      results.apiUrl = apiUrl;
+
+      const apiResponse = await fetch(apiUrl);
+      const apiData = await apiResponse.json();
+      results.apiResponse = {
+        status: apiResponse.status,
+        data: apiData,
+      };
+    } catch (error) {
+      results.error = error instanceof Error ? error.message : String(error);
+    }
+
+    setTestResults(results);
+    setLoading(false);
+  };
+
+  return (
+

🔧 Download Feature Tester

+ +
+ +
+ setFileId(e.target.value)} + className="flex-1 px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500" + /> + +
+
+ + {testResults && ( +
+
+

Test Results

+
+							{JSON.stringify(testResults, null, 2)}
+						
+
+
+ )} +
+  );
+}
diff --git a/apps/web/components/FileDownload.tsx b/apps/web/components/FileDownload.tsx
new file mode 100644
index 0000000..3e9b544
--- /dev/null
+++ b/apps/web/components/FileDownload.tsx
@@ -0,0 +1,157 @@
+import React, { useState } from 'react';
+import { useTusUpload } from '../hooks/useTusUpload';
+
+interface FileDownloadProps {
+  fileId?: string;
+  fileName?: string;
+  className?: string;
+}
+
+export function FileDownload({ fileId, fileName, className }: FileDownloadProps) {
+  const { getFileUrlByFileId, getFileInfo } = useTusUpload();
+  const [inputFileId, setInputFileId] = useState(fileId || '');
+  const [fileInfo, setFileInfo] = useState(null);
+  const [loading, setLoading] = useState(false);
+  const [error, setError] = useState(null);
+
+  // Fetch file info
+  const handleGetFileInfo = async () => {
+    if (!inputFileId.trim()) {
+      setError('Please enter a file ID');
+      return;
+    }
+
+    setLoading(true);
+    setError(null);
+
+    try {
+      const info = await getFileInfo(inputFileId);
+      if (info) {
+        setFileInfo(info);
+      } else {
+        setError('File does not exist or is not ready yet');
+      }
+    } catch (err) {
+      setError('Failed to fetch file info');
+    } finally {
+      setLoading(false);
+    }
+  };
+
+  // Download the file directly
+  const handleDirectDownload = () => {
+    const downloadUrl = getFileUrlByFileId(inputFileId);
+    window.open(downloadUrl, '_blank');
+  };
+
+  // Copy the download link
+  const handleCopyLink = async () => {
+    const downloadUrl = getFileUrlByFileId(inputFileId);
+    try {
+      await navigator.clipboard.writeText(downloadUrl);
+      alert('Download link copied to clipboard!');
+    } catch (error) {
+      console.error('Copy failed:', error);
+    }
+  };
+
+  // Preview the file in a new window
+  const handlePreview = () => {
+    const downloadUrl = getFileUrlByFileId(inputFileId);
+    window.open(downloadUrl, '_blank');
+  };
+
+  return (
+

File Download

+ + {/* File ID input */} +
+ +
+ setInputFileId(e.target.value)} + placeholder="Enter file ID" + className="flex-1 px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500" + /> + +
+
+ + {/* Error message */} + {error &&
{error}
} + + {/* File info */} + {fileInfo && ( +
+

File Info

+
+
+ Filename: {fileInfo.title || 'Unknown'} +
+
+
+ Type: {fileInfo.type || 'Unknown'} +
+
+
+ Status: {fileInfo.status || 'Unknown'} +
+ {fileInfo.meta?.size && ( +
+ Size: {formatFileSize(fileInfo.meta.size)} +
+ )} +
+ Created: {new Date(fileInfo.createdAt).toLocaleString()} +
+
+
+ )} + + {/* Action buttons */} + {inputFileId && ( +
+ + + +
+ )} + + {/* Usage notes */} +
+

How to use:

+
+ • Enter a file ID and click "Look up" to fetch the file info
+ • "Direct download" opens a new window to download the file
+ • "Preview/View" works for previewable files such as images and PDFs
+ • "Copy link" copies the download URL to the clipboard
+
+
+  );
+}
+
+// Format a file size for display
+function formatFileSize(bytes: number): string {
+  if (bytes === 0) return '0 Bytes';
+  const k = 1024;
+  const sizes = ['Bytes', 'KB', 'MB', 'GB', 'TB'];
+  const i = Math.floor(Math.log(bytes) / Math.log(k));
+  return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + ' ' + sizes[i];
+}
diff --git a/apps/web/components/FileUpload.tsx b/apps/web/components/FileUpload.tsx
new file mode 100644
index 0000000..ce8670d
--- /dev/null
+++ b/apps/web/components/FileUpload.tsx
@@ -0,0 +1,218 @@
+'use client';
+import React, { useCallback, useState } from 'react';
+import { useTusUpload } from '../hooks/useTusUpload';
+
+interface UploadedFile {
+  fileId: string;
+  fileName: string;
+  url: string;
+}
+
+export function FileUpload() {
+  const { uploadProgress, isUploading, uploadError, handleFileUpload, getFileUrlByFileId, serverUrl } = useTusUpload();
+
+  const [uploadedFiles, setUploadedFiles] = useState<UploadedFile[]>([]);
+  const [dragOver, setDragOver] = useState(false);
+
+  // Handle file selection
+  const handleFileSelect = useCallback(
+    async (files: FileList | null) => {
+      if (!files || files.length === 0) return;
+
+      for (let i = 0; i < files.length; i++) {
+        const file = files[i];
+
+        try {
+          const result = await handleFileUpload(
+            file,
+            (result) => {
+              console.log('Upload success:', result);
+              setUploadedFiles((prev) => [
+                ...prev,
+                {
+                  fileId: result.fileId,
+                  fileName: result.fileName,
+                  url: result.url,
+                },
+              ]);
+            },
+            (error) => {
+              console.error('Upload error:', error);
+            },
+          );
+        } catch (error) {
+          console.error('Upload failed:', error);
+        }
+      }
+    },
+    [handleFileUpload],
+  );
+
+  // Handle drag-and-drop uploads
+  const handleDrop = useCallback(
+    (e: React.DragEvent) => {
+      e.preventDefault();
+      setDragOver(false);
+      handleFileSelect(e.dataTransfer.files);
+    },
+    [handleFileSelect],
+  );
+
+  const handleDragOver = useCallback((e: React.DragEvent) => {
+    e.preventDefault();
+    setDragOver(true);
+  }, []);
+
+  const handleDragLeave = useCallback((e: React.DragEvent) => {
+    e.preventDefault();
+    setDragOver(false);
+  }, []);
+
+  // Handle the file input
+  const handleInputChange = useCallback(
+    (e: React.ChangeEvent<HTMLInputElement>) => {
+      handleFileSelect(e.target.files);
+    },
+    [handleFileSelect],
+  );
+
+  // Copy a link to the clipboard
+  const copyToClipboard = useCallback(async (url: string) => {
+    try {
+      await navigator.clipboard.writeText(url);
+      alert('Link copied to clipboard!');
+    } catch (error) {
+      console.error('Failed to copy:', error);
+    }
+  }, []);
+
+  return (
+

File Upload

+ + {/* Server info */} +
+

+ Server URL: {serverUrl} +

+
+ + {/* Drag-and-drop upload area */} +
+
+
+ + + +
+
+

Drag files here, or

+ +
+

Multiple files supported; the TUS protocol gives you resumable uploads

+
+
+ + {/* Upload progress */} + {isUploading && ( +
+
+ Uploading... + {uploadProgress}% +
+
+
+
+
+ )} + + {/* Error message */} + {uploadError && ( +
+

+ Upload failed: + {uploadError} +

+
+ )} + + {/* Uploaded file list */} + {uploadedFiles.length > 0 && ( +
+

Uploaded Files

+
+ {uploadedFiles.map((file, index) => ( +
+
+
+ + + +
+
+

{file.fileName}

+

File ID: {file.fileId}

+
+
+
+ + View + + +
+
+ ))} +
+
+ )} + + {/* Usage notes */} +
+

How it works:

+
+ • Drag-and-drop and click-to-upload are both supported
+ • Uses the TUS protocol: large files and resumable uploads
+ • Once an upload finishes, the file is reachable through its link
+ • Images and PDFs render directly in the browser
+ • Other file types trigger a download
+
+
+  );
+}
diff --git a/apps/web/components/SimpleUploadExample.tsx b/apps/web/components/SimpleUploadExample.tsx
new file mode 100644
index 0000000..f5eb1d9
--- /dev/null
+++ b/apps/web/components/SimpleUploadExample.tsx
@@ -0,0 +1,75 @@
+import React, { useState } from 'react';
+import { useTusUpload } from '../hooks/useTusUpload';
+
+export function SimpleUploadExample() {
+  const { uploadProgress, isUploading, uploadError, handleFileUpload, getFileUrlByFileId } = useTusUpload();
+  const [uploadedFileUrl, setUploadedFileUrl] = useState('');
+
+  const handleFileChange = async (e: React.ChangeEvent<HTMLInputElement>) => {
+    const file = e.target.files?.[0];
+    if (!file) return;
+
+    try {
+      const result = await handleFileUpload(
+        file,
+        (result) => {
+          console.log('Upload succeeded!', result);
+          setUploadedFileUrl(result.url);
+        },
+        (error) => {
+          console.error('Upload failed:', error);
+        },
+      );
+    } catch (error) {
+      console.error('Upload error:', error);
+    }
+  };
+
+  return (
+

Simple Upload Example

+ +
+ +
+ + {isUploading && ( +
+
+ Upload progress + {uploadProgress}% +
+
+
+
+
+ )} + + {uploadError && ( +
{uploadError}
+ )} + + {uploadedFileUrl && ( +
+

Upload succeeded!

+ + View file + +
+ )} +
+  );
+}
diff --git a/apps/web/docs/UPLOAD_HOOK_USAGE.md b/apps/web/docs/UPLOAD_HOOK_USAGE.md
new file mode 100644
index 0000000..c3f27c0
--- /dev/null
+++ b/apps/web/docs/UPLOAD_HOOK_USAGE.md
@@ -0,0 +1,287 @@
+# TUS Upload Hook Usage Guide
+
+## Overview
+
+`useTusUpload` is a custom React hook that provides TUS-based file uploads, with support for large files, resumable uploads, and progress tracking.
+
+## Environment variables
+
+Make sure the following variables are set in your `.env` file:
+
+```env
+NEXT_PUBLIC_SERVER_PORT=3000
+NEXT_PUBLIC_SERVER_IP=http://localhost
+```
+
+**Note**: in Next.js, client components can only read environment variables prefixed with `NEXT_PUBLIC_`.
+
+## Hook API
+
+### Return values
+
+```typescript
+const {
+  uploadProgress,      // Upload progress (0-100)
+  isUploading,         // Whether an upload is in flight
+  uploadError,         // Upload error message
+  handleFileUpload,    // Upload function
+  getFileUrlByFileId,  // Build an access URL from a file ID
+  getFileInfo,         // Fetch file details
+  getUploadStatus,     // Fetch upload status
+  serverUrl,           // Server address
+} = useTusUpload();
+```
+
+### Main methods
+
+#### `handleFileUpload(file, onSuccess?, onError?)`
+
+The main upload entry point.
+
+**Parameters:**
+
+- `file: File` - the file to upload
+- `onSuccess?: (result: UploadResult) => void` - success callback
+- `onError?: (error: string) => void` - error callback
+
+**Returns:** `Promise<UploadResult>`
+
+**The UploadResult interface:**
+
+```typescript
+interface UploadResult {
+  compressedUrl: string; // Compressed-variant URL (currently the same as the original URL)
+  url: string;           // File access URL
+  fileId: string;        // Unique file ID
+  fileName: string;      // File name
+}
+```
+
+#### `getFileUrlByFileId(fileId: string)`
+
+Builds an access URL from a file ID.
+
+**Parameters:**
+
+- `fileId: string` - unique file ID
+
+**Returns:** `string` - the file access URL
+
+## Usage examples
+
+### Basic usage
+
+```tsx
+import React, { useState } from 'react';
+import { useTusUpload } from '../hooks/useTusUpload';
+
+function UploadComponent() {
+  const { uploadProgress, isUploading, uploadError, handleFileUpload } = useTusUpload();
+  const [uploadedUrl, setUploadedUrl] = useState('');
+
+  const handleFileChange = async (e: React.ChangeEvent<HTMLInputElement>) => {
+    const file = e.target.files?.[0];
+    if (!file) return;
+
+    try {
+      const result = await handleFileUpload(
+        file,
+        (result) => {
+          console.log('Upload succeeded!', result);
+          setUploadedUrl(result.url);
+        },
+        (error) => {
+          console.error('Upload failed:', error);
+        },
+      );
+    } catch (error) {
+      console.error('Upload error:', error);
+    }
+  };
+
+  return (
+ + + {isUploading && ( +
+

Upload progress: {uploadProgress}%

+ +
+ )} + + {uploadError &&

{uploadError}

} + + {uploadedUrl && ( + + 查看上传的文件 + + )} +
+ ); +} +``` + +### 拖拽上传 + +```tsx +import React, { useCallback, useState } from 'react'; +import { useTusUpload } from '../hooks/useTusUpload'; + +function DragDropUpload() { + const { handleFileUpload, isUploading, uploadProgress } = useTusUpload(); + const [dragOver, setDragOver] = useState(false); + + const handleDrop = useCallback( + async (e: React.DragEvent) => { + e.preventDefault(); + setDragOver(false); + + const files = e.dataTransfer.files; + if (files.length > 0) { + await handleFileUpload(files[0]); + } + }, + [handleFileUpload], + ); + + const handleDragOver = useCallback((e: React.DragEvent) => { + e.preventDefault(); + setDragOver(true); + }, []); + + return ( +
setDragOver(false)} + style={{ + border: dragOver ? '2px dashed #0070f3' : '2px dashed #ccc', + padding: '20px', + textAlign: 'center', + backgroundColor: dragOver ? '#f0f8ff' : '#fafafa', + }} + > + {isUploading ?

Uploading... {uploadProgress}%

:

Drag files here to upload

} +
+  );
+}
+```
+
+### Multi-file upload
+
+```tsx
+function MultiFileUpload() {
+  const { handleFileUpload } = useTusUpload();
+  const [uploadingFiles, setUploadingFiles] = useState<Map<string, number>>(new Map());
+
+  const handleFilesChange = async (e: React.ChangeEvent<HTMLInputElement>) => {
+    const files = e.target.files;
+    if (!files) return;
+
+    for (let i = 0; i < files.length; i++) {
+      const file = files[i];
+      const fileId = `${file.name}-${Date.now()}-${i}`;
+
+      setUploadingFiles((prev) => new Map(prev).set(fileId, 0));
+
+      try {
+        await handleFileUpload(
+          file,
+          (result) => {
+            console.log(`File ${file.name} uploaded:`, result);
+            setUploadingFiles((prev) => {
+              const newMap = new Map(prev);
+              newMap.delete(fileId);
+              return newMap;
+            });
+          },
+          (error) => {
+            console.error(`File ${file.name} failed to upload:`, error);
+            setUploadingFiles((prev) => {
+              const newMap = new Map(prev);
+              newMap.delete(fileId);
+              return newMap;
+            });
+          },
+        );
+      } catch (error) {
+        console.error(`File ${file.name} errored during upload:`, error);
+      }
+    }
+  };
+
+  return (
+ + + {uploadingFiles.size > 0 && ( +
+

Files currently uploading:

+ {Array.from(uploadingFiles.entries()).map(([fileId, progress]) => ( +
+ {fileId}: {progress}% +
+ ))} +
+ )} +
+  );
+}
+```
+
+## Features
+
+### 1. Resumable uploads
+
+The TUS protocol supports resumption: if an upload is interrupted, it can continue from where it left off (see the sketch just below this list of features).
+
+### 2. Large-file support
+
+Suited to uploading large files; there is no file-size limit (beyond what the server is configured to allow).
+
+### 3. Progress tracking
+
+Upload progress is reported in real time for a good user experience.
+
+### 4. Error handling
+
+Detailed error messages and a retry mechanism are provided.
+
+### 5. Automatic retries
+
+A built-in retry mechanism automatically retries when the network misbehaves.
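How resumption is triggered from the client is worth one concrete illustration. The following is a minimal, hypothetical sketch built on `tus-js-client`'s own fingerprinting; the surrounding function and `endpoint` argument are illustrative and not part of the hook:

```typescript
import * as tus from 'tus-js-client';

// Start an upload, resuming a previous attempt for the same file if one exists.
// tus-js-client stores upload URLs (keyed by a file fingerprint) in localStorage.
async function startOrResume(file: File, endpoint: string): Promise<tus.Upload> {
  const upload = new tus.Upload(file, {
    endpoint,
    retryDelays: [0, 3000, 5000, 10000],
    metadata: { filename: file.name, filetype: file.type },
  });

  const previous = await upload.findPreviousUploads();
  if (previous.length > 0) {
    // Continue from the stored upload URL instead of creating a new upload.
    upload.resumeFromPreviousUpload(previous[0]);
  }

  upload.start();
  return upload;
}
```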
+## Troubleshooting
+
+### 1. Environment variables not picked up
+
+Make sure the variables are prefixed with `NEXT_PUBLIC_` and that the Next.js app has been restarted.
+
+### 2. Uploads fail
+
+Check that the server is running and the port is correct.
+
+### 3. CORS errors
+
+Make sure the backend server has the correct CORS settings.
+
+### 4. Files cannot be accessed
+
+After confirming the upload succeeded, check that the returned URL is correct.
+
+## Notes
+
+1. **Next.js environment variables**: client components can only read variables prefixed with `NEXT_PUBLIC_`
+2. **Server configuration**: make sure the backend server supports the TUS protocol
+3. **File size**: large files are supported, but keep server and client memory limits in mind
+4. **Network conditions**: resumable uploads are especially useful on unstable networks
+
+## API routes
+
+The hook talks to the following API routes:
+
+- `POST /upload` - TUS upload endpoint
+- `GET /download/:fileId` - file download/access
+- `GET /api/storage/resource/:fileId` - fetch file info
+- `HEAD /upload/:fileId` - fetch upload status
diff --git a/apps/web/hooks/useFileDownload.ts b/apps/web/hooks/useFileDownload.ts
new file mode 100644
index 0000000..4379a4b
--- /dev/null
+++ b/apps/web/hooks/useFileDownload.ts
@@ -0,0 +1,180 @@
+import { useState } from 'react';
+import { useTusUpload } from './useTusUpload';
+
+interface DownloadProgress {
+  loaded: number;
+  total: number;
+  percentage: number;
+}
+
+export function useFileDownload() {
+  const { getFileUrlByFileId, serverUrl } = useTusUpload();
+  const [downloadProgress, setDownloadProgress] = useState<DownloadProgress | null>(null);
+  const [isDownloading, setIsDownloading] = useState(false);
+  const [downloadError, setDownloadError] = useState<string | null>(null);
+
+  // Direct download (handled by the browser)
+  const downloadFile = (fileId: string, filename?: string) => {
+    const url = getFileUrlByFileId(fileId);
+    const link = document.createElement('a');
+    link.href = url;
+    if (filename) {
+      link.download = filename;
+    }
+    link.target = '_blank';
+    document.body.appendChild(link);
+    link.click();
+    document.body.removeChild(link);
+  };
+
+  // Download with progress reporting
+  const downloadFileWithProgress = async (
+    fileId: string,
+    filename?: string,
+    onProgress?: (progress: DownloadProgress) => void,
+  ): Promise<Blob> => {
+    return new Promise(async (resolve, reject) => {
+      setIsDownloading(true);
+      setDownloadError(null);
+      setDownloadProgress(null);
+
+      try {
+        const url = getFileUrlByFileId(fileId);
+        const response = await fetch(url);
+
+        if (!response.ok) {
+          throw new Error(`HTTP error! status: ${response.status}`);
+        }
+
+        const contentLength = response.headers.get('Content-Length');
+        const total = contentLength ? parseInt(contentLength, 10) : 0;
+        let loaded = 0;
+
+        const reader = response.body?.getReader();
+        if (!reader) {
+          throw new Error('Failed to get response reader');
+        }
+
+        const chunks: Uint8Array[] = [];
+
+        while (true) {
+          const { done, value } = await reader.read();
+
+          if (done) break;
+
+          if (value) {
+            chunks.push(value);
+            loaded += value.length;
+
+            const progress = {
+              loaded,
+              total,
+              percentage: total > 0 ? Math.round((loaded / total) * 100) : 0,
+            };
+
+            setDownloadProgress(progress);
+            onProgress?.(progress);
+          }
+        }
+
+        // Assemble the Blob
+        const blob = new Blob(chunks);
+
+        // If a filename was provided, trigger the download automatically
+        if (filename) {
+          const downloadUrl = URL.createObjectURL(blob);
+          const link = document.createElement('a');
+          link.href = downloadUrl;
+          link.download = filename;
+          document.body.appendChild(link);
+          link.click();
+          document.body.removeChild(link);
+          URL.revokeObjectURL(downloadUrl);
+        }
+
+        setIsDownloading(false);
+        setDownloadProgress(null);
+        resolve(blob);
+      } catch (error) {
+        const errorMessage = error instanceof Error ? error.message : 'Download failed';
+        setDownloadError(errorMessage);
+        setIsDownloading(false);
+        setDownloadProgress(null);
+        reject(new Error(errorMessage));
+      }
+    });
+  };
+
+  // Preview the file (opens a new window)
+  const previewFile = (fileId: string) => {
+    const url = getFileUrlByFileId(fileId);
+    window.open(url, '_blank', 'noopener,noreferrer');
+  };
+
+  // Get a Blob URL for the file (for previews)
+  const getFileBlobUrl = async (fileId: string): Promise<string> => {
+    try {
+      const blob = await downloadFileWithProgress(fileId);
+      return URL.createObjectURL(blob);
+    } catch (error) {
+      throw new Error('Failed to create blob URL');
+    }
+  };
+
+  // Copy the file link to the clipboard
+  const copyFileLink = async (fileId: string): Promise<void> => {
+    try {
+      const url = getFileUrlByFileId(fileId);
+      await navigator.clipboard.writeText(url);
+    } catch (error) {
+      throw new Error('Failed to copy link');
+    }
+  };
+
+  // Check whether a file can be previewed (by MIME type)
+  const canPreview = (mimeType: string): boolean => {
+    const previewableTypes = [
+      'image/', // all images
+      'application/pdf',
+      'text/',
+      'video/',
+      'audio/',
+    ];
+
+    return previewableTypes.some((type) => mimeType.startsWith(type));
+  };
+
+  // Pick an icon for the file type
+  const getFileIcon = (mimeType: string): string => {
+    if (mimeType.startsWith('image/')) return '🖼️';
+    if (mimeType.startsWith('video/')) return '🎥';
+    if (mimeType.startsWith('audio/')) return '🎵';
+    if (mimeType === 'application/pdf') return '📄';
+    if (mimeType.startsWith('text/')) return '📝';
+    if (mimeType.includes('word')) return '📝';
+    if (mimeType.includes('excel') || mimeType.includes('spreadsheet')) return '📊';
+    if (mimeType.includes('powerpoint') || mimeType.includes('presentation')) return '📊';
+    if (mimeType.includes('zip') || mimeType.includes('rar') || mimeType.includes('archive')) return '📦';
+    return '📁';
+  };
+
+  return {
+    // State
+    downloadProgress,
+    isDownloading,
+    downloadError,
+
+    // Methods
+    downloadFile,
+    downloadFileWithProgress,
+    previewFile,
+    getFileBlobUrl,
+    copyFileLink,
+
+    // Utilities
+    canPreview,
+    getFileIcon,
+    getFileUrlByFileId,
+    serverUrl,
+  };
+}
diff --git a/apps/web/hooks/useTusUpload.ts b/apps/web/hooks/useTusUpload.ts
index 85c3d31..11d8c01 100644
--- a/apps/web/hooks/useTusUpload.ts
+++ b/apps/web/hooks/useTusUpload.ts
@@ -1,7 +1,5 @@
-import { useState } from "react";
-import * as tus from "tus-js-client";
-import { env } from "../env";
-import { getCompressedImageUrl } from "@nice/utils";
+import { useState } from 'react';
+import * as tus from 'tus-js-client';
 
 interface UploadResult {
   compressedUrl: string;
@@ -11,105 +9,146 @@
 }
 
 export function useTusUpload() {
-    const [uploadProgress, setUploadProgress] = useState<
-        Record<string, number>
-    >({});
-    const [isUploading, setIsUploading] = useState(false);
+    const [uploadProgress, setUploadProgress] = useState(0);
+    const [isUploading, setIsUploading] = useState(false);
     const [uploadError, setUploadError] = useState(null);
 
-    const getFileId = (url: string) => {
-        const parts = url.split("/");
-        const uploadIndex = parts.findIndex((part) => part === "upload");
-        if (uploadIndex === -1 || uploadIndex + 4 >= parts.length) {
-            throw new Error("Invalid upload URL format");
-        }
-        return parts.slice(uploadIndex + 1, uploadIndex + 5).join("/");
+    // Read the server configuration
+    const getServerUrl = () => {
+        const ip = process.env.NEXT_PUBLIC_SERVER_IP || 'http://localhost';
+        const port = process.env.NEXT_PUBLIC_SERVER_PORT || '3000';
+        return `${ip}:${port}`;
     };
 
-    const getResourceUrl = (url: string) => {
-        const parts = url.split("/");
-        const uploadIndex = parts.findIndex((part) => part === "upload");
"upload"); - if (uploadIndex === -1 || uploadIndex + 4 >= parts.length) { - throw new Error("Invalid upload URL format"); - } - const resUrl = `http://${env.SERVER_IP}:${env.FILE_PORT}/uploads/${parts.slice(uploadIndex + 1, uploadIndex + 6).join("/")}`; - return resUrl; - }; - const handleFileUpload = async ( - file: File | Blob, - onSuccess: (result: UploadResult) => void, - onError: (error: Error) => void, - fileKey: string // 添加文件唯一标识 - ) => { - // console.log() - setIsUploading(true); - setUploadProgress((prev) => ({ ...prev, [fileKey]: 0 })); - setUploadError(null); - try { - // 如果是Blob,需要转换为File - let fileName = "uploaded-file"; - if (file instanceof Blob && !(file instanceof File)) { - // 根据MIME类型设置文件扩展名 - const extension = file.type.split('/')[1]; - fileName = `uploaded-file.${extension}`; - } - const uploadFile = file instanceof Blob && !(file instanceof File) - ? new File([file], fileName, { type: file.type }) - : file as File; - console.log(`http://${env.SERVER_IP}:${env.SERVER_PORT}/upload`); - const upload = new tus.Upload(uploadFile, { - endpoint: `http://${env.SERVER_IP}:${env.SERVER_PORT}/upload`, - retryDelays: [0, 1000, 3000, 5000], + // 文件上传函数 + const handleFileUpload = ( + file: File, + onSuccess?: (result: UploadResult) => void, + onError?: (error: string) => void, + ): Promise => { + return new Promise((resolve, reject) => { + setIsUploading(true); + setUploadProgress(0); + setUploadError(null); + + const serverUrl = getServerUrl(); + const uploadUrl = `${serverUrl}/upload`; + + const upload = new tus.Upload(file, { + endpoint: uploadUrl, + retryDelays: [0, 3000, 5000, 10000, 20000], metadata: { - filename: uploadFile.name, - filetype: uploadFile.type, - size: uploadFile.size as any, - }, - onProgress: (bytesUploaded, bytesTotal) => { - const progress = Number( - ((bytesUploaded / bytesTotal) * 100).toFixed(2) - ); - setUploadProgress((prev) => ({ - ...prev, - [fileKey]: progress, - })); - }, - onSuccess: async (payload) => { - if (upload.url) { - const fileId = getFileId(upload.url); - //console.log(fileId) - const url = getResourceUrl(upload.url); - setIsUploading(false); - setUploadProgress((prev) => ({ - ...prev, - [fileKey]: 100, - })); - onSuccess({ - compressedUrl: getCompressedImageUrl(url), - url, - fileId, - fileName: uploadFile.name, - }); - } + filename: file.name, + filetype: file.type, }, onError: (error) => { - const err = - error instanceof Error - ? 
+                    console.error('Upload failed:', error);
+                    const errorMessage = error.message || 'Upload failed';
+                    setUploadError(errorMessage);
                     setIsUploading(false);
-                    setUploadError(error.message);
-                    console.log(error);
-                    onError(err);
+                    onError?.(errorMessage);
+                    reject(new Error(errorMessage));
+                },
+                onProgress: (bytesUploaded, bytesTotal) => {
+                    const percentage = Math.round((bytesUploaded / bytesTotal) * 100);
+                    setUploadProgress(percentage);
+                },
+                onSuccess: () => {
+                    console.log('Upload completed successfully');
+                    setIsUploading(false);
+                    setUploadProgress(100);
+
+                    // Extract the directory-style fileId from the upload URL
+                    const uploadUrl = upload.url;
+                    if (!uploadUrl) {
+                        const error = 'Failed to get upload URL';
+                        setUploadError(error);
+                        onError?.(error);
+                        reject(new Error(error));
+                        return;
+                    }
+
+                    // Take the full upload ID, then strip the filename part to get the directory path
+                    const fullUploadId = uploadUrl.replace(/^.*\/upload\//, '');
+                    const fileId = fullUploadId.replace(/\/[^/]+$/, '');
+
+                    console.log('Full upload ID:', fullUploadId);
+                    console.log('Extracted fileId (directory):', fileId);
+
+                    const result: UploadResult = {
+                        fileId,
+                        fileName: file.name,
+                        url: getFileUrlByFileId(fileId),
+                        compressedUrl: getFileUrlByFileId(fileId), // In this simple implementation the compressed variant is the same as the original
+                    };
+
+                    onSuccess?.(result);
+                    resolve(result);
                 },
             });
+
+            // Start the upload
             upload.start();
+        });
+    };
+
+    // Build a file access URL from a fileId
+    const getFileUrlByFileId = (fileId: string): string => {
+        const serverUrl = getServerUrl();
+        // URL-encode the fileId so the slashes inside it are handled
+        const encodedFileId = encodeURIComponent(fileId);
+        return `${serverUrl}/download/${encodedFileId}`;
+    };
+
+    // Check that a file exists and fetch its details
+    const getFileInfo = async (fileId: string) => {
+        try {
+            const serverUrl = getServerUrl();
+            // URL-encode the fileId so the slashes inside it are handled
+            const encodedFileId = encodeURIComponent(fileId);
+            const response = await fetch(`${serverUrl}/api/storage/resource/${encodedFileId}`);
+            const data = await response.json();
+
+            if (data.status === 'UPLOADED' && data.resource) {
+                return {
+                    ...data.resource,
+                    url: getFileUrlByFileId(fileId),
+                };
+            }
+
+            console.log('File info response:', data);
+            return null;
         } catch (error) {
-            const err =
-                error instanceof Error ? error : new Error("Upload failed");
-            setIsUploading(false);
-            setUploadError(err.message);
-            onError(err);
+            console.error('Failed to get file info:', error);
+            return null;
+        }
+    };
+
+    // Get the upload status
+    const getUploadStatus = async (fileId: string) => {
+        try {
+            const serverUrl = getServerUrl();
+            const response = await fetch(`${serverUrl}/upload/${fileId}`, {
+                method: 'HEAD',
+            });
+
+            if (response.status === 200) {
+                const uploadLength = response.headers.get('Upload-Length');
+                const uploadOffset = response.headers.get('Upload-Offset');
+
+                return {
+                    isComplete: uploadLength === uploadOffset,
+                    progress:
+                        uploadLength && uploadOffset ? Math.round((parseInt(uploadOffset) / parseInt(uploadLength)) * 100) : 0,
+                    uploadLength: uploadLength ? parseInt(uploadLength) : 0,
+                    uploadOffset: uploadOffset ? parseInt(uploadOffset) : 0,
+                };
+            }
+
+            return null;
+        } catch (error) {
+            console.error('Failed to get upload status:', error);
+            return null;
         }
     };
 
@@ -118,5 +157,9 @@
         isUploading,
         uploadError,
         handleFileUpload,
+        getFileUrlByFileId,
+        getFileInfo,
+        getUploadStatus,
+        serverUrl: getServerUrl(),
     };
 }
diff --git a/apps/web/package.json b/apps/web/package.json
index 78e928e..cde8104 100644
--- a/apps/web/package.json
+++ b/apps/web/package.json
@@ -31,6 +31,7 @@
     "react": "^19.1.0",
     "react-dom": "^19.1.0",
     "superjson": "^2.2.2",
+    "tus-js-client": "^4.3.1",
     "valibot": "^1.1.0"
   },
   "devDependencies": {
diff --git a/debug-minio.js b/debug-minio.js
new file mode 100644
index 0000000..4e2ef0f
--- /dev/null
+++ b/debug-minio.js
@@ -0,0 +1,121 @@
+#!/usr/bin/env node
+
+/**
+ * MinIO connection debug script
+ */
+
+const { S3 } = require('@aws-sdk/client-s3');
+
+async function debugMinIO() {
+  console.log('🔍 Starting MinIO connection debug...\n');
+
+  const config = {
+    endpoint: 'http://localhost:9000',
+    region: 'us-east-1',
+    credentials: {
+      accessKeyId: '7Nt7OyHkwIoo3zvSKdnc',
+      secretAccessKey: 'EZ0cyrjJAsabTLNSqWcU47LURMppBW2kka3LuXzb',
+    },
+    forcePathStyle: true,
+  };
+
+  console.log('Configuration:');
+  console.log('- Endpoint:', config.endpoint);
+  console.log('- Region:', config.region);
+  console.log('- Access Key:', config.credentials.accessKeyId);
+  console.log('- Force Path Style:', config.forcePathStyle);
+  console.log();
+
+  const s3Client = new S3(config);
+
+  try {
+    // 1. Test basic connectivity
+    console.log('📡 Testing basic connectivity...');
+    const buckets = await s3Client.listBuckets();
+    console.log('✅ Connected!');
+    console.log('📂 Existing buckets:', buckets.Buckets?.map((b) => b.Name) || []);
+    console.log();
+
+    // 2. Check the test123 bucket
+    const bucketName = 'test123';
+    console.log(`🪣 Checking bucket "${bucketName}"...`);
+
+    try {
+      await s3Client.headBucket({ Bucket: bucketName });
+      console.log(`✅ Bucket "${bucketName}" exists`);
+    } catch (error) {
+      if (error.name === 'NotFound') {
+        console.log(`❌ Bucket "${bucketName}" does not exist, creating it...`);
+        try {
+          await s3Client.createBucket({ Bucket: bucketName });
+          console.log(`✅ Bucket "${bucketName}" created`);
+        } catch (createError) {
+          console.log(`❌ Failed to create bucket:`, createError.message);
+          return;
+        }
+      } else {
+        console.log(`❌ Failed to check bucket:`, error.message);
+        return;
+      }
+    }
+
+    // 3. Test a simple upload
+    console.log('\n📤 Testing a simple upload...');
+    const testKey = 'test-file.txt';
+    const testContent = 'Hello MinIO!';
+
+    try {
+      await s3Client.putObject({
+        Bucket: bucketName,
+        Key: testKey,
+        Body: testContent,
+      });
+      console.log(`✅ Simple upload succeeded: ${testKey}`);
+    } catch (error) {
+      console.log(`❌ Simple upload failed:`, error.message);
+      console.log('Error details:', error);
+      return;
+    }
+
+    // 4. Test multipart upload initialization
+    console.log('\n🔄 Testing multipart upload initialization...');
+    const multipartKey = 'test-multipart.txt';
+
+    try {
+      const multipartUpload = await s3Client.createMultipartUpload({
+        Bucket: bucketName,
+        Key: multipartKey,
+      });
+      console.log(`✅ Multipart upload initialized: ${multipartUpload.UploadId}`);
+
+      // Abort this multipart upload right away
+      await s3Client.abortMultipartUpload({
+        Bucket: bucketName,
+        Key: multipartKey,
+        UploadId: multipartUpload.UploadId,
+      });
+      console.log('✅ Multipart upload aborted');
+    } catch (error) {
+      console.log(`❌ Multipart upload initialization failed:`, error.message);
+      console.log('Error details:', error);
+      if (error.$metadata) {
+        console.log('HTTP status code:', error.$metadata.httpStatusCode);
+      }
+      return;
+    }
+
+    console.log('\n🎉 All tests passed! MinIO is configured correctly.');
+  } catch (error) {
+    console.log('❌ Connection failed:', error.message);
+    console.log('Error details:', error);
+
+    if (error.message.includes('ECONNREFUSED')) {
+      console.log('\n💡 Tips:');
+      console.log('- Make sure MinIO is running on port 9000');
+      console.log('- Check the container status: docker ps');
+      console.log('- Restart MinIO: docker restart minio-container-name');
+    }
+  }
+}
+
+debugMinIO().catch(console.error);
diff --git a/debug-s3.js b/debug-s3.js
new file mode 100644
index 0000000..99bcc59
--- /dev/null
+++ b/debug-s3.js
@@ -0,0 +1,169 @@
+#!/usr/bin/env node
+
+/**
+ * S3 storage debug script
+ * Quickly diagnoses S3 storage connection problems
+ */
+
+// Load .env if dotenv is available
+try {
+  require('dotenv').config();
+} catch (e) {
+  console.log('No dotenv found, using environment variables directly');
+}
+
+async function debugS3() {
+  console.log('🔍 Starting S3 storage debug...\n');
+
+  // 1. Check environment variables
+  console.log('📋 Environment variable check:');
+  const requiredVars = {
+    STORAGE_TYPE: process.env.STORAGE_TYPE,
+    S3_BUCKET: process.env.S3_BUCKET,
+    S3_ACCESS_KEY_ID: process.env.S3_ACCESS_KEY_ID,
+    S3_SECRET_ACCESS_KEY: process.env.S3_SECRET_ACCESS_KEY,
+    S3_REGION: process.env.S3_REGION,
+    S3_ENDPOINT: process.env.S3_ENDPOINT,
+  };
+
+  for (const [key, value] of Object.entries(requiredVars)) {
+    if (key.includes('SECRET')) {
+      console.log(`  ${key}: ${value ? '✅ set' : '❌ not set'}`);
+    } else {
+      console.log(`  ${key}: ${value || '❌ not set'}`);
+    }
+  }
+
+  if (process.env.STORAGE_TYPE !== 's3') {
+    console.log('\n❌ STORAGE_TYPE is not s3, so the S3 connection cannot be tested');
+    return;
+  }
+
+  const missingVars = ['S3_BUCKET', 'S3_ACCESS_KEY_ID', 'S3_SECRET_ACCESS_KEY'].filter((key) => !process.env[key]);
+
+  if (missingVars.length > 0) {
+    console.log(`\n❌ Missing required environment variables: ${missingVars.join(', ')}`);
+    console.log('Set these variables and try again');
+    return;
+  }
+
+  console.log('\n✅ Environment variable check passed\n');
+
+  // 2. Test loading the AWS SDK
+  console.log('📦 Loading the AWS SDK...');
+  try {
+    const { S3 } = require('@aws-sdk/client-s3');
+    console.log('✅ AWS SDK loaded\n');
+
+    // 3. Create the S3 client
+    console.log('🔧 Creating the S3 client...');
+    const config = {
+      region: process.env.S3_REGION || 'auto',
+      credentials: {
+        accessKeyId: process.env.S3_ACCESS_KEY_ID,
+        secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
+      },
+    };
+
+    if (process.env.S3_ENDPOINT) {
+      config.endpoint = process.env.S3_ENDPOINT;
+    }
+
+    if (process.env.S3_FORCE_PATH_STYLE === 'true') {
+      config.forcePathStyle = true;
+    }
+
+    console.log('S3 client configuration:', {
+      region: config.region,
+      endpoint: config.endpoint || 'default AWS endpoint',
+      forcePathStyle: config.forcePathStyle || false,
+    });
+
+    const s3Client = new S3(config);
+    console.log('✅ S3 client created\n');
+
+    // 4. Test bucket access
+    console.log('🪣 Testing bucket access...');
+    try {
+      await s3Client.headBucket({ Bucket: process.env.S3_BUCKET });
+      console.log('✅ Bucket is accessible');
+    } catch (error) {
+      console.log(`❌ Bucket access failed: ${error.message}`);
+      console.log('Error details:', error);
+
+      if (error.name === 'NotFound') {
+        console.log('  💡 Hint: the bucket does not exist; check the bucket name');
+      } else if (error.name === 'Forbidden') {
+        console.log('  💡 Hint: access denied; check the access key permissions');
+      } else if (error.message.includes('getaddrinfo ENOTFOUND')) {
+        console.log('  💡 Hint: DNS resolution failed; check the endpoint setting');
+      }
+      return;
+    }
+
+    // 5. Test listing objects
+    console.log('\n📂 Testing object listing...');
+    try {
+      const result = await s3Client.listObjectsV2({
+        Bucket: process.env.S3_BUCKET,
+        MaxKeys: 5,
+      });
+      console.log(`✅ Listing succeeded; the bucket holds ${result.KeyCount || 0} objects`);
+
+      if (result.Contents && result.Contents.length > 0) {
+        console.log('  First few objects:');
+        result.Contents.slice(0, 3).forEach((obj, index) => {
+          console.log(`    ${index + 1}. ${obj.Key} (${obj.Size} bytes)`);
+        });
+      }
+    } catch (error) {
+      console.log(`❌ Listing objects failed: ${error.message}`);
+      console.log('Error details:', error);
+      return;
+    }
+
+    // 6. Test creating a multipart upload
+    console.log('\n🚀 Testing multipart upload creation...');
+    const testKey = `test-multipart-${Date.now()}`;
+    let uploadId;
+
+    try {
+      const createResult = await s3Client.createMultipartUpload({
+        Bucket: process.env.S3_BUCKET,
+        Key: testKey,
+        Metadata: { test: 'debug-script' },
+      });
+      uploadId = createResult.UploadId;
+      console.log(`✅ Multipart upload created, UploadId: ${uploadId}`);
+
+      // Clean up the test upload
+      await s3Client.abortMultipartUpload({
+        Bucket: process.env.S3_BUCKET,
+        Key: testKey,
+        UploadId: uploadId,
+      });
+      console.log('✅ Test upload cleaned up');
+    } catch (error) {
+      console.log(`❌ Multipart upload creation failed: ${error.message}`);
+      console.log('Error details:', error);
+      return;
+    }
+
+    console.log('\n🎉 All S3 connection tests passed! S3 storage should work.');
+    console.log('\n💡 If uploads still fail, check:');
+    console.log('1. that the network connection is stable');
+    console.log('2. that no firewall is blocking the connection');
+    console.log('3. that the S3 service is not having a temporary issue');
+    console.log('4. the detailed error messages in the application logs');
+  } catch (error) {
+    console.log(`❌ Failed to load the AWS SDK: ${error.message}`);
+    console.log('Make sure the @aws-sdk/client-s3 package is installed:');
+    console.log('npm install @aws-sdk/client-s3');
+  }
+}
+
+// Run the debug script
+debugS3().catch((error) => {
+  console.error('Debug script failed:', error);
+  process.exit(1);
+});
diff --git a/docs/ENVIRONMENT.md b/docs/ENVIRONMENT.md
new file mode 100644
index 0000000..8a44db4
--- /dev/null
+++ b/docs/ENVIRONMENT.md
@@ -0,0 +1,235 @@
+# Environment Variable Guide
+
+This document describes every environment variable used in the project: what it does and how to configure it.
+
+## Storage configuration (@repo/storage)
+
+### Basics
+
+```bash
+# Storage backend selection
+STORAGE_TYPE=local # allowed values: local | s3
+
+# Upload expiration in milliseconds; 0 means never expire
+UPLOAD_EXPIRATION_MS=0
+```
+
+### Local storage
+
+Required when `STORAGE_TYPE=local`:
+
+```bash
+# Local storage directory
+UPLOAD_DIR=./uploads
+```
+
+### S3 storage
+
+Required when `STORAGE_TYPE=s3`:
+
+```bash
+# S3 bucket name (required)
+S3_BUCKET=my-app-uploads
+
+# S3 region (required)
+S3_REGION=us-east-1
+
+# S3 access key ID (required)
+S3_ACCESS_KEY_ID=your-access-key-id
+
+# S3 secret access key (required)
+S3_SECRET_ACCESS_KEY=your-secret-access-key
+
+# Custom S3 endpoint (optional; for MinIO, Alibaba Cloud OSS, and similar)
+S3_ENDPOINT=
+
+# Whether to force path-style addressing (optional)
+S3_FORCE_PATH_STYLE=false
+
+# Multipart part size in bytes (optional, default 8MB)
+S3_PART_SIZE=8388608
+
+# Maximum concurrent uploads (optional)
+S3_MAX_CONCURRENT_UPLOADS=60
+```
+
+## Example configurations
+
+### Development: local storage
+
+```bash
+# .env.development
+STORAGE_TYPE=local
+UPLOAD_DIR=./uploads
+```
+
+### Production: AWS S3
+
+```bash
+# .env.production
+STORAGE_TYPE=s3
+S3_BUCKET=prod-app-uploads
+S3_REGION=us-west-2
+S3_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
+S3_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
+```
+
+### MinIO for local development
+
+```bash
+# .env.local
+STORAGE_TYPE=s3
+S3_BUCKET=uploads
+S3_REGION=us-east-1
+S3_ACCESS_KEY_ID=minioadmin
+S3_SECRET_ACCESS_KEY=minioadmin
+S3_ENDPOINT=http://localhost:9000
+S3_FORCE_PATH_STYLE=true
+```
+
+### Alibaba Cloud OSS
+
+```bash
+# .env.aliyun
+STORAGE_TYPE=s3
+S3_BUCKET=my-oss-bucket
+S3_REGION=oss-cn-hangzhou
+S3_ACCESS_KEY_ID=your-access-key-id
+S3_SECRET_ACCESS_KEY=your-access-key-secret
+S3_ENDPOINT=https://oss-cn-hangzhou.aliyuncs.com
+S3_FORCE_PATH_STYLE=false
+```
+
+### Tencent Cloud COS
+
+```bash
+# .env.tencent
+STORAGE_TYPE=s3
+S3_BUCKET=my-cos-bucket-1234567890
+S3_REGION=ap-beijing
+S3_ACCESS_KEY_ID=your-secret-id
+S3_SECRET_ACCESS_KEY=your-secret-key
+S3_ENDPOINT=https://cos.ap-beijing.myqcloud.com
+S3_FORCE_PATH_STYLE=false
+```
+
+## Other configuration
+
+### Database
+
+```bash
+# PostgreSQL connection string
+DATABASE_URL="postgresql://username:password@localhost:5432/database"
+```
+
+### Redis
+
+```bash
+# Redis connection string
+REDIS_URL="redis://localhost:6379"
+```
+
+### Application
+
+```bash
+# Application port
+PORT=3000
+
+# Application environment
+NODE_ENV=development
+
+# Allowed CORS origins
+CORS_ORIGIN=http://localhost:3001
+```
+
+## Security notes
+
+1. **Protect sensitive values**:
+
+   - Never commit `.env` files containing secrets to version control
+   - Use an `.env.example` file as the template
+
+2. **Production**:
+
+   - Use a secrets manager (e.g. AWS Secrets Manager, Azure Key Vault)
+   - Rotate access keys regularly
+
+3. **Access control**:
+   - S3 buckets should be configured with appropriate access policies
+   - Follow the principle of least privilege
+
+## Validating the configuration
+
+You can validate the storage configuration through the following API endpoints:
+
+```bash
+# Validate a storage configuration
+curl -X POST http://localhost:3000/api/storage/storage/validate \
+  -H "Content-Type: application/json" \
+  -d '{
+    "type": "s3",
+    "s3": {
+      "bucket": "my-bucket",
+      "region": "us-east-1",
+      "accessKeyId": "your-key",
+      "secretAccessKey": "your-secret"
+    }
+  }'
+
+# Get the current storage info
+curl http://localhost:3000/api/storage/storage/info
+```
+
+## File access
+
+### Unified download endpoint
+
+Whatever the storage backend, files are served through one download endpoint:
+
+```bash
+# Unified download endpoint (recommended)
+GET http://localhost:3000/download/2024/01/01/abc123/example.jpg
+```
+
+### Local storage
+
+With local storage:
+
+- the download endpoint reads the local file and returns it directly
+- both inline display (images, PDFs, and similar) and downloads are supported
+
+### S3 storage
+
+With S3 storage:
+
+- the download endpoint redirects to the S3 URL
+- the S3 URL can also be accessed directly (if the bucket is public)
+
+```bash
+# Direct S3 access
+GET https://bucket.s3.region.amazonaws.com/2024/01/01/abc123/example.jpg
+```
+
+### Generating file URLs
+
+```typescript
+import { StorageUtils } from '@repo/storage';
+
+const storageUtils = StorageUtils.getInstance();
+
+// Build a download URL (recommended)
+const fileUrl = storageUtils.generateFileUrl('file-id');
+// Result: http://localhost:3000/download/file-id
+
+// Build a full public URL
+const publicUrl = storageUtils.generateFileUrl('file-id', 'https://yourdomain.com');
+// Result: https://yourdomain.com/download/file-id
+
+// Build a direct S3 URL (S3 storage only)
+try {
+  const directUrl = storageUtils.generateDirectUrl('file-id');
+  // Result: https://bucket.s3.region.amazonaws.com/file-id
+} catch (error) {
+  // Local storage throws here
+}
+```
diff --git a/docs/STATIC_FILES.md b/docs/STATIC_FILES.md
new file mode 100644
index 0000000..5b78f9d
--- /dev/null
+++ b/docs/STATIC_FILES.md
@@ -0,0 +1,279 @@
+# File Access Guide
+
+This document explains how to use the file access features provided by the `@repo/storage` package.
+
+## Overview
+
+The storage package exposes one unified file access interface:
+
+- **Unified download endpoint** (`/download/:fileId`) - works with every storage backend and provides uniform file access
+
+## Usage
+
+### 1. Basic setup
+
+```typescript
+import { createStorageApp } from '@repo/storage';
+
+// Create a storage app with everything enabled
+const storageApp = createStorageApp({
+  apiBasePath: '/api/storage', // management API
+  uploadPath: '/upload',       // TUS upload endpoint
+  downloadPath: '/download',   // file download endpoint
+});
+
+app.route('/', storageApp);
+```
+
+### 2. Wiring the pieces up separately
分别配置功能
+
+```typescript
+import { createStorageRoutes, createTusUploadRoutes, createFileDownloadRoutes } from '@repo/storage';
+
+const app = new Hono();
+
+// 存储管理 API
+app.route('/api/storage', createStorageRoutes());
+
+// 文件上传
+app.route('/upload', createTusUploadRoutes());
+
+// 文件下载(所有存储类型)
+app.route('/download', createFileDownloadRoutes());
+```
+
+## 文件访问方式
+
+### 统一下载接口
+
+无论使用哪种存储类型,都通过统一的下载接口访问文件:
+
+```bash
+# 访问文件(支持内联显示和下载)
+GET http://localhost:3000/download/2024/01/01/abc123/image.jpg
+GET http://localhost:3000/download/2024/01/01/abc123/document.pdf
+```
+
+### 本地存储
+
+当 `STORAGE_TYPE=local` 时:
+
+- 下载接口直接读取本地文件
+- 自动设置正确的 Content-Type
+- 支持内联显示(`Content-Disposition: inline`)
+
+### S3 存储
+
+当 `STORAGE_TYPE=s3` 时:
+
+- 下载接口重定向到 S3 URL
+- 也可以直接访问 S3 URL(如果存储桶是公开的)
+
+```bash
+# 直接访问 S3 URL(如果存储桶是公开的)
+GET https://bucket.s3.region.amazonaws.com/2024/01/01/abc123/file.jpg
+```
+
+## 代码示例
+
+### 生成文件访问 URL
+
+```typescript
+import { StorageUtils } from '@repo/storage';
+
+const storageUtils = StorageUtils.getInstance();
+
+// 生成文件访问 URL
+function getFileUrl(fileId: string) {
+  // 结果: http://localhost:3000/download/2024/01/01/abc123/file.jpg
+  return storageUtils.generateFileUrl(fileId);
+}
+
+// 生成完整的公开访问 URL
+function getPublicFileUrl(fileId: string) {
+  // 结果: https://yourdomain.com/download/2024/01/01/abc123/file.jpg
+  return storageUtils.generateFileUrl(fileId, 'https://yourdomain.com');
+}
+
+// 生成 S3 直接访问 URL(仅 S3 存储)
+function getDirectUrl(fileId: string) {
+  try {
+    // S3 存储: https://bucket.s3.region.amazonaws.com/2024/01/01/abc123/file.jpg
+    return storageUtils.generateDirectUrl(fileId);
+  } catch (error) {
+    // 本地存储会抛出错误,使用下载接口
+    return storageUtils.generateFileUrl(fileId);
+  }
+}
+```
+
+### 在 React 组件中使用
+
+```tsx
+import { useState, useEffect } from 'react';
+
+function FileDisplay({ fileId }: { fileId: string }) {
+  const [fileUrl, setFileUrl] = useState('');
+
+  useEffect(() => {
+    // 获取文件资源信息(fileId 含路径分隔符,需要 URL 编码)
+    fetch(`/api/storage/resource/${encodeURIComponent(fileId)}`)
+      .then((res) => res.json())
+      .then((data) => {
+        // 上传完成后,getResourceByFileId 返回的状态为 'UPLOADED'
+        if (data.status === 'UPLOADED' && data.resource) {
+          // 生成文件访问 URL
+          const url = `/download/${encodeURIComponent(fileId)}`;
+          setFileUrl(url);
+        }
+      });
+  }, [fileId]);
+
+  if (!fileUrl) return <div>Loading...</div>;
+
+  return (
+    <div>
+      {/* 图片会内联显示 */}
+      <img src={fileUrl} alt="Uploaded file" />
+
+      {/* 下载链接 */}
+      <a href={fileUrl} download>
+        下载文件
+      </a>
+
+      {/* PDF 等文档可以在新窗口打开 */}
+      <a href={fileUrl} target="_blank" rel="noopener noreferrer">
+        在新窗口打开
+      </a>
+    </div>
+  );
+}
+```
+
+### 文件类型处理
+
+```typescript
+function getFileDisplayUrl(fileId: string, mimeType: string) {
+  const baseUrl = `/download/${fileId}`;
+
+  // 根据文件类型决定显示方式
+  if (mimeType.startsWith('image/')) {
+    // 图片直接显示
+    return baseUrl;
+  } else if (mimeType === 'application/pdf') {
+    // PDF 可以内联显示
+    return baseUrl;
+  } else {
+    // 其他文件类型强制下载
+    return `${baseUrl}?download=true`;
+  }
+}
+```
+
+## 安全考虑
+
+### 1. 访问控制
+
+如需要权限验证,可以添加认证中间件:
+
+```typescript
+import { createFileDownloadRoutes } from '@repo/storage';
+
+const app = new Hono();
+
+// 添加认证中间件
+app.use('/download/*', async (c, next) => {
+  // 检查用户权限(isValidToken 为示例函数,需自行实现)
+  const token = c.req.header('Authorization');
+  if (!token || !isValidToken(token)) {
+    return c.json({ error: 'Unauthorized' }, 401);
+  }
+  await next();
+});
+
+// 添加文件下载服务
+app.route('/download', createFileDownloadRoutes());
+```
+
+### 2. 文件类型限制
+
+```typescript
+import { getResourceByFileId } from '@repo/storage';
+
+app.use('/download/*', async (c, next) => {
+  // 通配符中间件中取不到命名参数,从路径中提取并解码 fileId
+  const fileId = decodeURIComponent(c.req.path.replace(/^\/download\//, ''));
+
+  // 从数据库获取文件信息
+  const { resource } = await getResourceByFileId(fileId);
+  if (!resource) {
+    return c.json({ error: 'File not found' }, 404);
+  }
+
+  // 检查文件类型(Resource 模型中 MIME 类型保存在 type 字段)
+  const allowedTypes = ['image/jpeg', 'image/png', 'application/pdf'];
+  if (!resource.type || !allowedTypes.includes(resource.type)) {
+    return c.json({ error: 'File type not allowed' }, 403);
+  }
+
+  await next();
+});
+```
+
+## 性能优化
+
+### 1. 缓存设置
+
+```typescript
+app.use('/download/*', async (c, next) => {
+  await next();
+
+  // 设置缓存头(generateETag 为示例函数,需自行实现)
+  c.header('Cache-Control', 'public, max-age=31536000'); // 1年
+  c.header('ETag', generateETag(c.req.path));
+});
+```
+
+### 2. CDN 配置
+
+对于生产环境,建议使用 CDN:
+
+```typescript
+import { StorageUtils } from '@repo/storage';
+
+const storageUtils = StorageUtils.getInstance();
+
+// 使用 CDN 域名
+const cdnUrl = 'https://cdn.yourdomain.com';
+const fileUrl = storageUtils.generateFileUrl(fileId, cdnUrl);
+```
+
+## 故障排除
+
+### 常见问题
+
+1. **404 文件未找到**
+
+   - 检查文件是否存在于数据库
+   - 确认文件路径是否正确
+   - 检查文件权限(本地存储)
+
+2. **下载接口不工作**
+
+   - 检查路由配置
+   - 确认存储配置正确
+   - 查看服务器日志
+
+3.
**S3 文件无法访问** + - 检查 S3 存储桶权限 + - 确认文件是否上传成功 + - 验证 S3 配置是否正确 + +### 调试方法 + +```bash +# 检查文件是否存在 +curl -I http://localhost:3000/download/2024/01/01/abc123/file.jpg + +# 检查存储配置 +curl http://localhost:3000/api/storage/storage/info + +# 检查文件信息 +curl http://localhost:3000/api/storage/resource/2024/01/01/abc123/file.jpg +``` diff --git a/env.example b/env.example new file mode 100644 index 0000000..76e441e --- /dev/null +++ b/env.example @@ -0,0 +1,71 @@ +# =========================================== +# 存储配置 (@repo/storage) +# =========================================== + +# 存储类型: local | s3 +STORAGE_TYPE=local + +# 上传文件过期时间(毫秒),0表示不过期 +UPLOAD_EXPIRATION_MS=0 + +# =========================================== +# 本地存储配置 (当 STORAGE_TYPE=local 时) +# =========================================== + +# 本地存储目录路径 +UPLOAD_DIR=./uploads + +# =========================================== +# S3 存储配置 (当 STORAGE_TYPE=s3 时) +# =========================================== + +# S3 存储桶名称 (必需) +S3_BUCKET= + +# S3 区域 (必需) +S3_REGION=us-east-1 + +# S3 访问密钥 ID (必需) +S3_ACCESS_KEY_ID= + +# S3 访问密钥 (必需) +S3_SECRET_ACCESS_KEY= + +# 自定义 S3 端点 (可选,用于 MinIO、阿里云 OSS 等) +S3_ENDPOINT= + +# 是否强制使用路径样式 (可选) +S3_FORCE_PATH_STYLE=false + +# 分片上传大小,单位字节 (可选,默认 8MB) +S3_PART_SIZE=8388608 + +# 最大并发上传数 (可选) +S3_MAX_CONCURRENT_UPLOADS=60 + +# =========================================== +# 数据库配置 +# =========================================== + +# 数据库连接字符串 +DATABASE_URL="postgresql://username:password@localhost:5432/database" + +# =========================================== +# Redis 配置 +# =========================================== + +# Redis 连接字符串 +REDIS_URL="redis://localhost:6379" + +# =========================================== +# 应用配置 +# =========================================== + +# 应用端口 +PORT=3000 + +# 应用环境 +NODE_ENV=development + +# CORS 允许的源 +CORS_ORIGIN=http://localhost:3001 \ No newline at end of file diff --git a/package.json b/package.json index ddbb3cb..4d09265 100644 --- a/package.json +++ b/package.json @@ -11,10 +11,11 @@ "devDependencies": { "@repo/eslint-config": "workspace:*", "@repo/typescript-config": "workspace:*", + "@types/node": "^20", + "dotenv": "16.4.5", "prettier": "^3.5.3", "turbo": "^2.5.3", - "typescript": "5.8.3", - "@types/node": "^20" + "typescript": "5.8.3" }, "packageManager": "pnpm@9.12.3", "engines": { diff --git a/packages/storage/.env.example b/packages/storage/.env.example new file mode 100644 index 0000000..8260781 --- /dev/null +++ b/packages/storage/.env.example @@ -0,0 +1,8 @@ + +STORAGE_TYPE=s3 +UPLOAD_DIR=/opt/projects/nice/uploads +S3_ENDPOINT=https://s3.tebi.io +S3_REGION=auto +S3_BUCKET=d503-space-yeast-station +S3_ACCESS_KEY_ID=CDlX2J6cTgblOsZX +S3_SECRET_ACCESS_KEY=CujF9oIHAxWVF25UY9BtbI6iP6jqGZEE7Y6YCRNs \ No newline at end of file diff --git a/packages/storage/README.md b/packages/storage/README.md new file mode 100644 index 0000000..e55a4f1 --- /dev/null +++ b/packages/storage/README.md @@ -0,0 +1,322 @@ +# @repo/storage + +一个完全兼容 Hono 的存储解决方案,支持本地存储和 S3 兼容存储,提供 TUS 协议上传、文件管理和 REST API。 + +## 特性 + +- 🚀 **多存储支持**: 支持本地文件系统和 S3 兼容存储 +- 📤 **TUS 协议**: 支持可恢复的文件上传 +- 🔧 **Hono 集成**: 提供开箱即用的 Hono 中间件 +- 📊 **文件管理**: 完整的文件生命周期管理 +- 🗄️ **数据库集成**: 与 Prisma 深度集成 +- ⏰ **自动清理**: 支持过期文件自动清理 +- 🔄 **存储迁移**: 支持不同存储类型间的数据迁移 + +## 安装 + +```bash +npm install @repo/storage +``` + +## 环境变量配置 + +### 基础配置 + +| 变量名 | 类型 | 默认值 | 描述 | +| ---------------------- | --------------- | ------- | ------------------------------------- | +| `STORAGE_TYPE` | `local` \| `s3` | `local` | 存储类型选择 | +| `UPLOAD_EXPIRATION_MS` | 
`number` | `0` | 上传文件过期时间(毫秒),0表示不过期 | + +### 本地存储配置 + +当 `STORAGE_TYPE=local` 时需要配置: + +| 变量名 | 类型 | 默认值 | 描述 | +| ------------ | -------- | ----------- | ---------------- | +| `UPLOAD_DIR` | `string` | `./uploads` | 本地存储目录路径 | + +### S3 存储配置 + +当 `STORAGE_TYPE=s3` 时需要配置: + +| 变量名 | 类型 | 默认值 | 描述 | 必需 | +| --------------------------- | --------- | ----------- | ---------------------------------- | ---- | +| `S3_BUCKET` | `string` | - | S3 存储桶名称 | ✅ | +| `S3_REGION` | `string` | `us-east-1` | S3 区域 | ✅ | +| `S3_ACCESS_KEY_ID` | `string` | - | S3 访问密钥 ID | ✅ | +| `S3_SECRET_ACCESS_KEY` | `string` | - | S3 访问密钥 | ✅ | +| `S3_ENDPOINT` | `string` | - | 自定义 S3 端点(用于兼容其他服务) | ❌ | +| `S3_FORCE_PATH_STYLE` | `boolean` | `false` | 是否强制使用路径样式 | ❌ | +| `S3_PART_SIZE` | `number` | `8388608` | 分片上传大小(8MB) | ❌ | +| `S3_MAX_CONCURRENT_UPLOADS` | `number` | `60` | 最大并发上传数 | ❌ | + +## 配置示例 + +### 本地存储配置 + +```bash +# .env +STORAGE_TYPE=local +UPLOAD_DIR=./uploads +``` + +### AWS S3 配置 + +```bash +# .env +STORAGE_TYPE=s3 +S3_BUCKET=my-app-uploads +S3_REGION=us-west-2 +S3_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE +S3_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY +``` + +### MinIO 配置 + +```bash +# .env +STORAGE_TYPE=s3 +S3_BUCKET=uploads +S3_REGION=us-east-1 +S3_ACCESS_KEY_ID=minioadmin +S3_SECRET_ACCESS_KEY=minioadmin +S3_ENDPOINT=http://localhost:9000 +S3_FORCE_PATH_STYLE=true +``` + +### 阿里云 OSS 配置 + +```bash +# .env +STORAGE_TYPE=s3 +S3_BUCKET=my-oss-bucket +S3_REGION=oss-cn-hangzhou +S3_ACCESS_KEY_ID=your-access-key-id +S3_SECRET_ACCESS_KEY=your-access-key-secret +S3_ENDPOINT=https://oss-cn-hangzhou.aliyuncs.com +S3_FORCE_PATH_STYLE=false +``` + +### 腾讯云 COS 配置 + +```bash +# .env +STORAGE_TYPE=s3 +S3_BUCKET=my-cos-bucket-1234567890 +S3_REGION=ap-beijing +S3_ACCESS_KEY_ID=your-secret-id +S3_SECRET_ACCESS_KEY=your-secret-key +S3_ENDPOINT=https://cos.ap-beijing.myqcloud.com +S3_FORCE_PATH_STYLE=false +``` + +## 快速开始 + +### 1. 基础使用 + +```typescript +import { createStorageApp, startCleanupScheduler } from '@repo/storage'; +import { Hono } from 'hono'; + +const app = new Hono(); + +// 创建存储应用 +const storageApp = createStorageApp({ + apiBasePath: '/api/storage', // API 路径 + uploadPath: '/upload', // 上传路径 +}); + +// 挂载存储应用 +app.route('/', storageApp); + +// 启动清理调度器 +startCleanupScheduler(); +``` + +### 2. 分别使用 API 和上传功能 + +```typescript +import { createStorageRoutes, createTusUploadRoutes } from '@repo/storage'; + +const app = new Hono(); + +// 只添加存储管理 API +app.route('/api/storage', createStorageRoutes()); + +// 只添加文件上传功能 +app.route('/upload', createTusUploadRoutes()); +``` + +### 3. 
使用存储管理器 + +```typescript +import { StorageManager, StorageUtils } from '@repo/storage'; + +// 获取存储管理器实例 +const storageManager = StorageManager.getInstance(); + +// 获取存储信息 +const storageInfo = storageManager.getStorageInfo(); +console.log('当前存储类型:', storageInfo.type); + +// 使用存储工具 +const storageUtils = StorageUtils.getInstance(); + +// 生成文件访问 URL(统一使用下载接口) +const fileUrl = storageUtils.generateFileUrl('2024/01/01/abc123/file.jpg'); +// 结果: http://localhost:3000/download/2024/01/01/abc123/file.jpg + +// 生成完整的公开访问 URL +const publicUrl = storageUtils.generateFileUrl('2024/01/01/abc123/file.jpg', 'https://yourdomain.com'); +// 结果: https://yourdomain.com/download/2024/01/01/abc123/file.jpg + +// 生成 S3 直接访问 URL(仅 S3 存储) +try { + const directUrl = storageUtils.generateDirectUrl('2024/01/01/abc123/file.jpg'); + // S3 存储: https://bucket.s3.region.amazonaws.com/2024/01/01/abc123/file.jpg +} catch (error) { + // 本地存储会抛出错误 +} + +// 检查文件是否存在 +const exists = await storageUtils.fileExists('file-id'); +``` + +### 4. 分别配置不同功能 + +```typescript +import { createStorageRoutes, createTusUploadRoutes, createFileDownloadRoutes } from '@repo/storage'; + +const app = new Hono(); + +// 只添加存储管理 API +app.route('/api/storage', createStorageRoutes()); + +// 只添加文件上传功能 +app.route('/upload', createTusUploadRoutes()); + +// 只添加文件下载功能(所有存储类型) +app.route('/download', createFileDownloadRoutes()); +``` + +## API 端点 + +### 文件资源管理 + +- `GET /api/storage/resource/:fileId` - 获取文件资源信息 +- `GET /api/storage/resources` - 获取所有资源 +- `GET /api/storage/resources/storage/:storageType` - 按存储类型获取资源 +- `GET /api/storage/resources/status/:status` - 按状态获取资源 +- `GET /api/storage/resources/uploading` - 获取正在上传的资源 +- `DELETE /api/storage/resource/:id` - 删除资源 +- `PATCH /api/storage/resource/:id` - 更新资源 + +### 文件访问和下载 + +- `GET /download/:fileId` - 文件下载和访问(支持所有存储类型) + +### 统计和管理 + +- `GET /api/storage/stats` - 获取资源统计信息 +- `POST /api/storage/cleanup` - 手动清理过期上传 +- `POST /api/storage/cleanup/by-status` - 按状态清理资源 +- `POST /api/storage/migrate-storage` - 迁移存储类型 + +### 存储配置 + +- `GET /api/storage/storage/info` - 获取存储信息 +- `POST /api/storage/storage/switch` - 切换存储配置 +- `POST /api/storage/storage/validate` - 验证存储配置 + +### 文件上传 + +- `POST /upload` - TUS 协议文件上传 +- `PATCH /upload/:id` - 续传文件 +- `HEAD /upload/:id` - 获取上传状态 + +## 数据库操作 + +```typescript +import { + getAllResources, + getResourceByFileId, + createResource, + updateResourceStatus, + deleteResource, +} from '@repo/storage'; + +// 获取所有资源 +const resources = await getAllResources(); + +// 根据文件ID获取资源 +const { status, resource } = await getResourceByFileId('file-id'); + +// 创建新资源 +const newResource = await createResource({ + fileId: 'unique-file-id', + filename: 'example.jpg', + size: 1024000, + mimeType: 'image/jpeg', + storageType: 'local', +}); +``` + +## 文件生命周期 + +1. **上传开始**: 创建资源记录,状态为 `UPLOADING` +2. **上传完成**: 状态更新为 `UPLOADED` +3. **处理中**: 状态可更新为 `PROCESSING` +4. **处理完成**: 状态更新为 `PROCESSED` +5. **清理**: 过期文件自动清理 + +## 存储迁移 + +支持在不同存储类型之间迁移数据: + +```bash +# API 调用示例 +curl -X POST http://localhost:3000/api/storage/migrate-storage \ + -H "Content-Type: application/json" \ + -d '{"from": "local", "to": "s3"}' +``` + +## 安全考虑 + +1. **环境变量**: 敏感信息(如 S3 密钥)应存储在环境变量中 +2. **访问控制**: 建议在生产环境中添加适当的身份验证 +3. **CORS 配置**: 根据需要配置跨域访问策略 +4. **文件验证**: 建议添加文件类型和大小验证 + +## 故障排除 + +### 常见问题 + +1. **找不到模块错误**: 确保已正确安装依赖包 +2. **S3 连接失败**: 检查网络连接和凭据配置 +3. **本地存储权限**: 确保应用有写入本地目录的权限 +4. 
**上传失败**: 检查文件大小限制和存储空间 + +### 调试模式 + +启用详细日志: + +```bash +DEBUG=storage:* npm start +``` + +## 许可证 + +MIT License + +## 贡献 + +欢迎提交 Issue 和 Pull Request! + +## 更新日志 + +### v2.0.0 + +- 重构为模块化架构 +- 添加完整的 TypeScript 支持 +- 支持多种 S3 兼容服务 +- 改进的错误处理和日志记录 diff --git a/packages/storage/docs/S3_DOWNLOAD_MECHANISM.md b/packages/storage/docs/S3_DOWNLOAD_MECHANISM.md new file mode 100644 index 0000000..ec61e88 --- /dev/null +++ b/packages/storage/docs/S3_DOWNLOAD_MECHANISM.md @@ -0,0 +1,110 @@ +# S3存储下载机制说明 + +## 问题背景 + +在文件上传系统中,我们使用了两种存储类型: + +- **本地存储(Local)**:文件存储在服务器本地文件系统 +- **S3存储(S3)**:文件存储在AWS S3或兼容的对象存储服务中 + +对于文件访问,我们使用了目录格式的 `fileId`,例如:`2025/05/28/RHwt8AkkZp` + +## 存储结构差异 + +### 本地存储 + +- **fileId**:`2025/05/28/RHwt8AkkZp` (目录路径) +- **实际存储**:`/uploads/2025/05/28/RHwt8AkkZp/filename.ext` +- **下载方式**:扫描目录,找到实际文件,返回文件流 + +### S3存储 + +- **fileId**:`2025/05/28/RHwt8AkkZp` (目录路径) +- **S3 Key**:`2025/05/28/RHwt8AkkZp/filename.ext` (完整对象路径) +- **下载方式**:重定向到S3 URL + +## 核心问题 + +S3存储中,对象的完整路径(S3 Key)包含文件名,但我们的 `fileId` 只是目录路径,缺少文件名部分。 + +## 解决方案 + +### 1. 文件名重建策略 + +我们通过以下方式重建完整的S3路径: + +```typescript +const fileName = resource.title || 'file'; +const fullS3Key = `${fileId}/${fileName}`; +``` + +### 2. URL生成逻辑 + +```typescript +// AWS S3 +const s3Url = `https://${bucket}.s3.${region}.amazonaws.com/${fullS3Key}`; + +// 自定义S3兼容服务(如MinIO) +const s3Url = `${endpoint}/${bucket}/${fullS3Key}`; +``` + +### 3. 下载流程 + +1. 从数据库获取文件信息(fileId + resource.title) +2. 重建完整S3 Key:`${fileId}/${fileName}` +3. 生成S3直接访问URL +4. 302重定向到S3 URL,让客户端直接从S3下载 + +## 优势 + +### 性能优势 + +- **302重定向**:避免服务器中转,减少带宽消耗 +- **直接下载**:客户端直接从S3下载,速度更快 +- **CDN友好**:可配合CloudFront等CDN使用 + +### 安全考虑 + +- **公开读取**:需要确保S3 bucket配置了适当的公开读取权限 +- **预签名URL**:未来可扩展支持预签名URL用于私有文件 + +## 局限性 + +### 文件名依赖 + +- 依赖数据库中存储的 `resource.title` 字段 +- 如果文件名不匹配,会导致404错误 + +### 替代方案 + +如果需要更可靠的方案,可以考虑: + +1. **存储完整S3 Key**:在数据库中存储完整的S3对象路径 +2. **S3 ListObjects API**:动态查询S3中的实际对象(会增加API调用成本) + +## 环境配置 + +确保S3配置正确: + +```env +STORAGE_TYPE=s3 +S3_BUCKET=your-bucket-name +S3_REGION=us-east-1 +S3_ACCESS_KEY_ID=your-access-key +S3_SECRET_ACCESS_KEY=your-secret-key +S3_ENDPOINT=https://s3.amazonaws.com # 可选,用于其他S3兼容服务 +``` + +## 测试验证 + +使用以下URL格式测试下载: + +``` +/download/2025%2F05%2F28%2FRHwt8AkkZp +``` + +应该会302重定向到: + +``` +https://your-bucket.s3.us-east-1.amazonaws.com/2025/05/28/RHwt8AkkZp/filename.ext +``` diff --git a/packages/storage/docs/TESTING_S3_SERVICES.md b/packages/storage/docs/TESTING_S3_SERVICES.md new file mode 100644 index 0000000..fc22076 --- /dev/null +++ b/packages/storage/docs/TESTING_S3_SERVICES.md @@ -0,0 +1,189 @@ +# S3存储测试服务推荐 + +## 免费云端S3服务 + +### 1. Tebi (强烈推荐) + +- **免费额度**: 25GB存储 + 250GB传输量(永久免费) +- **网站**: https://tebi.io +- **特点**: + - S3兼容API + - 地理分布式存储 + - 支持FTP/FTPS + - 无需信用卡注册 + - 提供个人助手支持 + +**配置示例**: + +```env +STORAGE_TYPE=s3 +S3_BUCKET=your-bucket-name +S3_REGION=auto +S3_ACCESS_KEY_ID=your-access-key +S3_SECRET_ACCESS_KEY=your-secret-key +S3_ENDPOINT=https://s3.tebi.io +``` + +### 2. Tigris + +- **免费额度**: 有免费层级 +- **网站**: https://www.tigrisdata.com +- **特点**: + - 全球分布式S3兼容存储 + - 零出站费用 + - 针对AI工作负载优化 + +**配置示例**: + +```env +STORAGE_TYPE=s3 +S3_BUCKET=your-bucket-name +S3_REGION=auto +S3_ACCESS_KEY_ID=your-access-key +S3_SECRET_ACCESS_KEY=your-secret-key +S3_ENDPOINT=https://fly.storage.tigris.dev +``` + +### 3. 
AWS S3 免费套餐
+
+- **免费额度**: 5GB存储 + 20,000个GET + 2,000个PUT(12个月)
+- **网站**: https://aws.amazon.com/s3/
+- **注意**: 需要信用卡验证
+
+**配置示例**:
+
+```env
+STORAGE_TYPE=s3
+S3_BUCKET=your-bucket-name
+S3_REGION=us-east-1
+S3_ACCESS_KEY_ID=your-access-key
+S3_SECRET_ACCESS_KEY=your-secret-key
+# AWS 使用默认endpoint,不需要设置S3_ENDPOINT
+```
+
+## 本地测试工具
+
+### 1. MinIO (推荐本地开发)
+
+最流行的自托管S3兼容存储。
+
+**Docker快速启动**:
+
+```bash
+docker run -p 9000:9000 -p 9001:9001 \
+  --name minio \
+  -e "MINIO_ROOT_USER=minioadmin" \
+  -e "MINIO_ROOT_PASSWORD=minioadmin" \
+  -v /tmp/minio-data:/data \
+  quay.io/minio/minio server /data --console-address ":9001"
+```
+
+**配置示例**:
+
+```env
+STORAGE_TYPE=s3
+S3_BUCKET=test-bucket
+S3_REGION=us-east-1
+S3_ACCESS_KEY_ID=minioadmin
+S3_SECRET_ACCESS_KEY=minioadmin
+S3_ENDPOINT=http://localhost:9000
+```
+
+### 2. S3Mock (Java项目测试)
+
+轻量级S3模拟服务器。
+
+**Docker启动**:
+
+```bash
+docker run -p 9090:9090 -p 9191:9191 -t adobe/s3mock
+```
+
+### 3. LocalStack (完整AWS模拟)
+
+模拟完整AWS服务栈。
+
+**Docker启动**:
+
+```bash
+docker run --rm -it -p 4566:4566 -p 4510-4559:4510-4559 localstack/localstack
+```
+
+## 快速测试步骤
+
+### 1. 选择服务并注册
+
+推荐从Tebi开始,因为:
+
+- 免费额度最大
+- 无需信用卡
+- 注册简单
+
+### 2. 获取凭据
+
+注册后在控制面板获取:
+
+- Access Key ID
+- Secret Access Key
+- 存储桶名称
+- Endpoint URL
+
+### 3. 配置环境变量
+
+```bash
+export STORAGE_TYPE=s3
+export S3_BUCKET=your-bucket-name
+export S3_REGION=auto
+export S3_ACCESS_KEY_ID=your-access-key
+export S3_SECRET_ACCESS_KEY=your-secret-key
+export S3_ENDPOINT=https://s3.tebi.io
+```
+
+### 4. 测试上传
+
+上传走的是 TUS 协议:先用 POST 创建上传,再向返回的地址 PATCH 数据:
+
+```bash
+# 启动你的应用
+npm run dev
+
+# 第一步:创建上传("Hello S3!" 为 9 字节;Upload-Metadata 的值是 base64 编码的文件名)
+curl -i -X POST http://localhost:3000/upload \
+  -H "Tus-Resumable: 1.0.0" \
+  -H "Upload-Length: 9" \
+  -H "Upload-Metadata: filename dGVzdC50eHQ="
+
+# 第二步:向响应 Location 头中的地址上传数据
+curl -X PATCH "<Location 头返回的地址>" \
+  -H "Tus-Resumable: 1.0.0" \
+  -H "Upload-Offset: 0" \
+  -H "Content-Type: application/offset+octet-stream" \
+  --data-binary "Hello S3!"
+```
+
+### 5. 验证存储
+
+- 登录服务提供商的Web控制台
+- 检查文件是否成功上传
+- 测试下载功能
+
+## 测试建议
+
+1. **开始小规模测试**: 先上传小文件(< 1MB)验证基本功能
+2. **测试大文件**: 逐步测试更大的文件(10MB, 100MB等)
+3. **测试分片上传**: 验证TUS分片上传功能
+4. **测试下载**: 确保文件可以正确下载
+5. **测试权限**: 验证访问控制和安全设置
+
+## 故障排除
+
+### 常见错误
+
+1. **403 Forbidden**: 检查Access Key和Secret是否正确
+2. **404 Not Found**: 确认存储桶名称和endpoint正确
+3. **SSL错误**: 某些服务可能需要设置SSL选项
+
+### 调试技巧
+
+1. 启用详细日志
+2. 使用AWS CLI工具测试连接(或使用文末的连接自检脚本)
+3. 检查网络连接和防火墙设置
+
+## 推荐测试顺序
+
+1. **Tebi** - 最容易开始,免费额度大
+2. **MinIO本地** - 完全控制,无网络依赖
+3. **AWS S3** - 最标准的实现,用于最终验证
+4. **Tigris** - 测试现代化特性
+
+选择适合你需求的服务开始测试吧!
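+
+## 附:连接自检脚本
+
+下面是一个基于 `@aws-sdk/client-s3`(与本项目调试脚本使用的同一 SDK)的最小自检草稿,用于在跑完整上传流程之前快速验证凭据与存储桶连通性。其中的默认值(`https://s3.tebi.io`、`test-bucket` 等)只是占位假设,请替换为你实际的服务商配置,`forcePathStyle` 也需按服务商要求调整:
+
+```typescript
+import { S3Client, HeadBucketCommand, ListObjectsV2Command } from '@aws-sdk/client-s3';
+
+// 占位配置:请通过环境变量或直接替换为实际凭据
+const client = new S3Client({
+  region: process.env.S3_REGION || 'auto',
+  endpoint: process.env.S3_ENDPOINT || 'https://s3.tebi.io',
+  forcePathStyle: process.env.S3_FORCE_PATH_STYLE === 'true',
+  credentials: {
+    accessKeyId: process.env.S3_ACCESS_KEY_ID || '',
+    secretAccessKey: process.env.S3_SECRET_ACCESS_KEY || '',
+  },
+});
+
+async function checkConnection(bucket: string) {
+  // HeadBucket 在凭据错误或存储桶不存在时会立即失败
+  await client.send(new HeadBucketCommand({ Bucket: bucket }));
+
+  // 尝试列出一个对象,确认具备读取权限
+  const listed = await client.send(new ListObjectsV2Command({ Bucket: bucket, MaxKeys: 1 }));
+  console.log(`存储桶可访问,本次列出 ${listed.KeyCount ?? 0} 个对象`);
+}
+
+checkConnection(process.env.S3_BUCKET || 'test-bucket').catch((error) => {
+  console.error('S3 连接自检失败:', error.message);
+  process.exit(1);
+});
+```
+
+如果第一步 `HeadBucketCommand` 就失败,通常对应上面"常见错误"中的 403/404 两类问题。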
diff --git a/packages/storage/env.example b/packages/storage/env.example new file mode 100644 index 0000000..b3329e0 --- /dev/null +++ b/packages/storage/env.example @@ -0,0 +1,23 @@ +# 存储配置 +STORAGE_TYPE=s3 + +# 本地存储配置 (当 STORAGE_TYPE=local 时使用) +LOCAL_STORAGE_DIRECTORY=./uploads + +# S3/MinIO 存储配置 (当 STORAGE_TYPE=s3 时使用) +S3_ENDPOINT=http://localhost:9000 +S3_REGION=us-east-1 +S3_BUCKET=test123 +# 使用Docker环境变量设置的凭据 +S3_ACCESS_KEY_ID=nice1234 +S3_SECRET_ACCESS_KEY=nice1234 +S3_FORCE_PATH_STYLE=true + +# S3 高级配置 +S3_PART_SIZE=8388608 +S3_MAX_CONCURRENT_UPLOADS=6 + +# 清理配置 +CLEANUP_INCOMPLETE_UPLOADS=true +CLEANUP_SCHEDULE=0 2 * * * +CLEANUP_MAX_AGE_HOURS=24 \ No newline at end of file diff --git a/packages/storage/package.json b/packages/storage/package.json new file mode 100644 index 0000000..cbca008 --- /dev/null +++ b/packages/storage/package.json @@ -0,0 +1,63 @@ +{ + "name": "@repo/storage", + "version": "2.0.0", + "description": "Storage implementation for Hono - 完全兼容 Hono 的 Storage", + "main": "dist/index.js", + "types": "dist/index.d.ts", + "scripts": { + "build": "tsc", + "dev": "tsc --watch", + "clean": "rm -rf dist" + }, + "dependencies": { + "@hono/zod-validator": "^0.5.0", + "@repo/db": "workspace:*", + "@repo/tus": "workspace:*", + "dotenv": "16.4.5", + "hono": "^4.7.10", + "ioredis": "5.4.1", + "jose": "^6.0.11", + "nanoid": "^5.1.5", + "transliteration": "^2.3.5", + "zod": "^3.25.23" + }, + "devDependencies": { + "@types/node": "^22.15.21", + "typescript": "^5.0.0" + }, + "peerDependencies": { + "@repo/db": "workspace:*", + "@repo/tus": "workspace:*", + "hono": "^4.0.0", + "ioredis": "^5.0.0" + }, + "exports": { + ".": { + "types": "./dist/index.d.ts", + "import": "./dist/index.js", + "require": "./dist/index.js" + } + }, + "files": [ + "dist", + "README.md" + ], + "keywords": [ + "storage", + "hono", + "tus", + "upload", + "typescript" + ], + "author": "Your Name", + "license": "MIT", + "repository": { + "type": "git", + "url": "https://github.com/your-org/your-repo.git", + "directory": "packages/storage" + }, + "bugs": { + "url": "https://github.com/your-org/your-repo/issues" + }, + "homepage": "https://github.com/your-org/your-repo/tree/main/packages/storage#readme" +} diff --git a/apps/backend/src/upload/storage.adapter.ts b/packages/storage/src/core/adapter.ts similarity index 69% rename from apps/backend/src/upload/storage.adapter.ts rename to packages/storage/src/core/adapter.ts index 6ba69fd..48b556a 100644 --- a/apps/backend/src/upload/storage.adapter.ts +++ b/packages/storage/src/core/adapter.ts @@ -1,59 +1,54 @@ import { FileStore, S3Store } from '@repo/tus'; import type { DataStore } from '@repo/tus'; - -// 存储类型枚举 -export enum StorageType { - LOCAL = 'local', - S3 = 's3', -} - -// 存储配置接口 -export interface StorageConfig { - type: StorageType; - // 本地存储配置 - local?: { - directory: string; - expirationPeriodInMilliseconds?: number; - }; - // S3 存储配置 - s3?: { - bucket: string; - region: string; - accessKeyId: string; - secretAccessKey: string; - endpoint?: string; // 用于兼容其他 S3 兼容服务 - forcePathStyle?: boolean; - partSize?: number; - maxConcurrentPartUploads?: number; - expirationPeriodInMilliseconds?: number; - }; -} +import { StorageType, StorageConfig } from '../types'; // 从环境变量获取存储配置 export function getStorageConfig(): StorageConfig { const storageType = (process.env.STORAGE_TYPE || 'local') as StorageType; + console.log('=== 存储配置调试信息 ==='); + console.log('STORAGE_TYPE:', process.env.STORAGE_TYPE); + console.log('实际存储类型:', storageType); + console.log('S3_BUCKET:', 
process.env.S3_BUCKET); + console.log('S3_ACCESS_KEY_ID:', process.env.S3_ACCESS_KEY_ID ? '已设置' : '未设置'); + console.log('S3_SECRET_ACCESS_KEY:', process.env.S3_SECRET_ACCESS_KEY ? '已设置' : '未设置'); + console.log('S3_ENDPOINT:', process.env.S3_ENDPOINT); + console.log('S3_REGION:', process.env.S3_REGION); + console.log('S3_FORCE_PATH_STYLE:', process.env.S3_FORCE_PATH_STYLE); + console.log('========================'); + const config: StorageConfig = { type: storageType, }; if (storageType === StorageType.LOCAL) { + console.log('配置本地存储'); + const directory = process.env.LOCAL_STORAGE_DIRECTORY || process.env.UPLOAD_DIR || './uploads'; + console.log('本地存储目录:', directory); config.local = { - directory: process.env.UPLOAD_DIR || './uploads', - expirationPeriodInMilliseconds: parseInt(process.env.UPLOAD_EXPIRATION_MS || '0'), // 默认不过期 + directory, + expirationPeriodInMilliseconds: 60 * 60 * 24 * 1000, // 默认24小时 }; } else if (storageType === StorageType.S3) { + console.log('配置S3存储'); config.s3 = { - bucket: process.env.S3_BUCKET || '', + bucket: process.env.S3_BUCKET || 'uploads', region: process.env.S3_REGION || 'us-east-1', - accessKeyId: process.env.S3_ACCESS_KEY_ID || '', - secretAccessKey: process.env.S3_SECRET_ACCESS_KEY || '', - endpoint: process.env.S3_ENDPOINT, - forcePathStyle: process.env.S3_FORCE_PATH_STYLE === 'true', + accessKeyId: process.env.S3_ACCESS_KEY_ID || 'minioadmin', + secretAccessKey: process.env.S3_SECRET_ACCESS_KEY || 'minioadmin', + endpoint: process.env.S3_ENDPOINT || 'http://localhost:9000', + forcePathStyle: process.env.S3_FORCE_PATH_STYLE === 'true' || !process.env.S3_FORCE_PATH_STYLE, // MinIO默认需要路径样式 partSize: parseInt(process.env.S3_PART_SIZE || '8388608'), // 8MB - maxConcurrentPartUploads: parseInt(process.env.S3_MAX_CONCURRENT_UPLOADS || '60'), - expirationPeriodInMilliseconds: parseInt(process.env.UPLOAD_EXPIRATION_MS || '0'), // 默认不过期 + maxConcurrentPartUploads: parseInt(process.env.S3_MAX_CONCURRENT_UPLOADS || '6'), + expirationPeriodInMilliseconds: 60 * 60 * 24 * 1000, // 默认24小时 }; + + console.log('S3 配置详情:'); + console.log('- Bucket:', config.s3.bucket || '❌ 未设置'); + console.log('- Region:', config.s3.region); + console.log('- Access Key:', config.s3.accessKeyId ? '✅ 已设置' : '❌ 未设置'); + console.log('- Secret Key:', config.s3.secretAccessKey ? 
'✅ 已设置' : '❌ 未设置'); + console.log('- Endpoint:', config.s3.endpoint || '使用默认AWS端点'); } return config; @@ -101,6 +96,7 @@ export function createStorageInstance(config: StorageConfig): DataStore { partSize: s3Config.partSize, maxConcurrentPartUploads: s3Config.maxConcurrentPartUploads, expirationPeriodInMilliseconds: s3Config.expirationPeriodInMilliseconds, + useTags: false, // 禁用标签功能,某些S3兼容服务不支持 s3ClientConfig: { bucket: s3Config.bucket, region: s3Config.region, diff --git a/packages/storage/src/core/index.ts b/packages/storage/src/core/index.ts new file mode 100644 index 0000000..095283d --- /dev/null +++ b/packages/storage/src/core/index.ts @@ -0,0 +1,5 @@ +// 存储适配器 +export * from './adapter'; + +// 便捷导出 +export { StorageManager } from './adapter'; diff --git a/packages/storage/src/database/index.ts b/packages/storage/src/database/index.ts new file mode 100644 index 0000000..4811dbb --- /dev/null +++ b/packages/storage/src/database/index.ts @@ -0,0 +1,2 @@ +// 数据库操作 +export * from './operations'; diff --git a/apps/backend/src/upload/upload.index.ts b/packages/storage/src/database/operations.ts similarity index 75% rename from apps/backend/src/upload/upload.index.ts rename to packages/storage/src/database/operations.ts index 83066fc..3cbe6e9 100644 --- a/apps/backend/src/upload/upload.index.ts +++ b/packages/storage/src/database/operations.ts @@ -1,6 +1,6 @@ import { prisma } from '@repo/db'; import type { Resource } from '@repo/db'; -import { StorageType } from './storage.adapter'; +import { StorageType } from '../types'; export async function getResourceByFileId(fileId: string): Promise<{ status: string; resource?: Resource }> { const resource = await prisma.resource.findFirst({ @@ -11,7 +11,10 @@ export async function getResourceByFileId(fileId: string): Promise<{ status: str return { status: 'pending' }; } - return { status: 'ready', resource }; + return { + status: resource.status || 'unknown', + resource, + }; } export async function getAllResources(): Promise { @@ -114,3 +117,37 @@ export async function migrateResourcesStorageType( return { count: result.count }; } + +export async function createResource(data: { + fileId: string; + filename: string; + size: number; + mimeType?: string | null; + storageType: StorageType; + status?: string; + hash?: string; +}): Promise { + return prisma.resource.create({ + data: { + fileId: data.fileId, + title: data.filename, + type: data.mimeType, + storageType: data.storageType, + status: data.status || 'UPLOADING', + meta: { + size: data.size, + hash: data.hash, + }, + }, + }); +} + +export async function updateResourceStatus(fileId: string, status: string, additionalData?: any): Promise { + return prisma.resource.update({ + where: { fileId }, + data: { + status, + ...additionalData, + }, + }); +} diff --git a/packages/storage/src/index.ts b/packages/storage/src/index.ts new file mode 100644 index 0000000..ea4272a --- /dev/null +++ b/packages/storage/src/index.ts @@ -0,0 +1,21 @@ +// 类型定义 +export * from './types'; + +// 核心功能 +export * from './core'; + +// 数据库操作 +export * from './database'; + +// 服务层 +export * from './services'; + +// Hono 中间件 +export * from './middleware'; + +// 便捷的默认导出 +export { StorageManager } from './core'; +export { StorageUtils } from './services'; +export { getTusServer, handleTusRequest } from './services'; +export { startCleanupScheduler, triggerCleanup } from './services'; +export { createStorageApp, createStorageRoutes, createTusUploadRoutes, createFileDownloadRoutes } from './middleware'; diff --git 
a/packages/storage/src/middleware/hono.ts b/packages/storage/src/middleware/hono.ts new file mode 100644 index 0000000..34ef29b --- /dev/null +++ b/packages/storage/src/middleware/hono.ts @@ -0,0 +1,511 @@ +import { Hono } from 'hono'; +import { handleTusRequest, cleanupExpiredUploads, getStorageInfo } from '../services/tus'; +import { + getResourceByFileId, + getAllResources, + deleteResource, + updateResource, + getResourcesByStorageType, + getResourcesByStatus, + getUploadingResources, + getResourceStats, + migrateResourcesStorageType, +} from '../database/operations'; +import { StorageManager, validateStorageConfig } from '../core/adapter'; +import { StorageType, type StorageConfig } from '../types'; +import { prisma } from '@repo/db'; + +/** + * 创建存储相关的 Hono 路由 + * @param basePath 基础路径,默认为 '/api/storage' + * @returns Hono 应用实例 + */ +export function createStorageRoutes(basePath: string = '/api/storage') { + const app = new Hono(); + + // 获取文件资源信息 + app.get('/resource/:fileId', async (c) => { + const encodedFileId = c.req.param('fileId'); + const fileId = decodeURIComponent(encodedFileId); + console.log('API request - Encoded fileId:', encodedFileId); + console.log('API request - Decoded fileId:', fileId); + const result = await getResourceByFileId(fileId); + return c.json(result); + }); + + // 获取所有资源 + app.get('/resources', async (c) => { + const resources = await getAllResources(); + return c.json(resources); + }); + + // 根据存储类型获取资源 + app.get('/resources/storage/:storageType', async (c) => { + const storageType = c.req.param('storageType') as StorageType; + const resources = await getResourcesByStorageType(storageType); + return c.json(resources); + }); + + // 根据状态获取资源 + app.get('/resources/status/:status', async (c) => { + const status = c.req.param('status'); + const resources = await getResourcesByStatus(status); + return c.json(resources); + }); + + // 获取正在上传的资源 + app.get('/resources/uploading', async (c) => { + const resources = await getUploadingResources(); + return c.json(resources); + }); + + // 获取资源统计信息 + app.get('/stats', async (c) => { + const stats = await getResourceStats(); + return c.json(stats); + }); + + // 删除资源 + app.delete('/resource/:id', async (c) => { + const id = c.req.param('id'); + const result = await deleteResource(id); + return c.json(result); + }); + + // 更新资源 + app.patch('/resource/:id', async (c) => { + const id = c.req.param('id'); + const data = await c.req.json(); + const result = await updateResource(id, data); + return c.json(result); + }); + + // 迁移资源存储类型(批量更新数据库中的存储类型标记) + app.post('/migrate-storage', async (c) => { + try { + const { from, to } = await c.req.json(); + const result = await migrateResourcesStorageType(from as StorageType, to as StorageType); + return c.json({ + success: true, + message: `Migrated ${result.count} resources from ${from} to ${to}`, + count: result.count, + }); + } catch (error) { + console.error('Failed to migrate storage type:', error); + return c.json( + { + success: false, + error: error instanceof Error ? 
error.message : 'Unknown error', + }, + 400, + ); + } + }); + + // 清理过期上传 + app.post('/cleanup', async (c) => { + const result = await cleanupExpiredUploads(); + return c.json(result); + }); + + // 手动清理指定状态的资源 + app.post('/cleanup/by-status', async (c) => { + try { + const { status, olderThanDays } = await c.req.json(); + const cutoffDate = new Date(); + cutoffDate.setDate(cutoffDate.getDate() - (olderThanDays || 30)); + + const deletedResources = await prisma.resource.deleteMany({ + where: { + status, + createdAt: { + lt: cutoffDate, + }, + }, + }); + + return c.json({ + success: true, + message: `Deleted ${deletedResources.count} resources with status ${status}`, + count: deletedResources.count, + }); + } catch (error) { + console.error('Failed to cleanup by status:', error); + return c.json( + { + success: false, + error: error instanceof Error ? error.message : 'Unknown error', + }, + 400, + ); + } + }); + + // 获取存储信息 + app.get('/storage/info', async (c) => { + const storageInfo = getStorageInfo(); + return c.json(storageInfo); + }); + + // 切换存储类型(需要重启应用) + app.post('/storage/switch', async (c) => { + try { + const newConfig = (await c.req.json()) as StorageConfig; + const storageManager = StorageManager.getInstance(); + await storageManager.switchStorage(newConfig); + + return c.json({ + success: true, + message: 'Storage configuration updated. Please restart the application for changes to take effect.', + newType: newConfig.type, + }); + } catch (error) { + console.error('Failed to switch storage:', error); + return c.json( + { + success: false, + error: error instanceof Error ? error.message : 'Unknown error', + }, + 400, + ); + } + }); + + // 验证存储配置 + app.post('/storage/validate', async (c) => { + try { + const config = (await c.req.json()) as StorageConfig; + const errors = validateStorageConfig(config); + + if (errors.length > 0) { + return c.json({ valid: false, errors }, 400); + } + + return c.json({ valid: true, message: 'Storage configuration is valid' }); + } catch (error) { + return c.json( + { + valid: false, + errors: [error instanceof Error ? 
error.message : 'Invalid JSON'], + }, + 400, + ); + } + }); + + return app; +} + +/** + * 创建TUS上传处理路由 + * @param uploadPath 上传路径,默认为 '/upload' + * @returns Hono 应用实例 + */ +export function createTusUploadRoutes(uploadPath: string = '/upload') { + const app = new Hono(); + + // TUS 协议处理 - 使用通用处理器 + app.all('/*', async (c) => { + try { + // 创建适配的请求和响应对象 + const adaptedReq = createNodeRequestAdapter(c); + const adaptedRes = createNodeResponseAdapter(c); + + await handleTusRequest(adaptedReq, adaptedRes); + return adaptedRes.getResponse(); + } catch (error) { + console.error('TUS request error:', error); + return c.json({ error: 'Upload request failed' }, 500); + } + }); + + return app; +} + +// Node.js 请求适配器 +function createNodeRequestAdapter(c: any) { + const honoReq = c.req; + const url = new URL(honoReq.url); + + // 导入Node.js模块 + const { Readable } = require('stream'); + const { EventEmitter } = require('events'); + + // 创建一个继承自Readable的适配器类 + class TusRequestAdapter extends Readable { + method: string; + url: string; + headers: Record; + httpVersion: string; + complete: boolean; + private reader: ReadableStreamDefaultReader | null = null; + private _reading: boolean = false; + + constructor() { + super(); + this.method = honoReq.method; + this.url = url.pathname + url.search; + this.headers = honoReq.header() || {}; + this.httpVersion = '1.1'; + this.complete = false; + + // 如果有请求体,获取reader + if (honoReq.method !== 'GET' && honoReq.method !== 'HEAD' && honoReq.raw.body) { + this.reader = honoReq.raw.body.getReader(); + } + } + + _read() { + if (this._reading || !this.reader) { + return; + } + + this._reading = true; + + this.reader + .read() + .then(({ done, value }) => { + this._reading = false; + if (done) { + this.push(null); // 结束流 + this.complete = true; + } else { + // 确保我们推送的是正确的二进制数据 + const buffer = Buffer.from(value); + this.push(buffer); + } + }) + .catch((error) => { + this._reading = false; + this.destroy(error); + }); + } + + // 模拟IncomingMessage的destroy方法 + destroy(error?: Error) { + if (this.reader) { + this.reader.cancel().catch(() => { + // 忽略取消错误 + }); + this.reader = null; + } + super.destroy(error); + } + } + + return new TusRequestAdapter(); +} + +// Node.js 响应适配器 +function createNodeResponseAdapter(c: any) { + let statusCode = 200; + let headers: Record = {}; + let body: any = null; + + const adapter = { + statusCode, + setHeader: (name: string, value: string) => { + headers[name] = value; + }, + getHeader: (name: string) => { + return headers[name]; + }, + writeHead: (code: number, reasonOrHeaders?: any, headersObj?: any) => { + statusCode = code; + if (typeof reasonOrHeaders === 'object') { + Object.assign(headers, reasonOrHeaders); + } + if (headersObj) { + Object.assign(headers, headersObj); + } + }, + write: (chunk: any) => { + if (body === null) { + body = chunk; + } else if (typeof body === 'string' && typeof chunk === 'string') { + body += chunk; + } else { + // 处理 Buffer 或其他类型 + body = chunk; + } + }, + end: (data?: any) => { + if (data !== undefined) { + body = data; + } + }, + // 添加事件方法 + on: (event: string, handler: Function) => { + // 简单的空实现 + }, + emit: (event: string, ...args: any[]) => { + // 简单的空实现 + }, + // 获取最终的 Response 对象 + getResponse: () => { + if (body === null || body === undefined) { + return new Response(null, { + status: statusCode, + headers: headers, + }); + } + + return new Response(body, { + status: statusCode, + headers: headers, + }); + }, + }; + + return adapter; +} + +/** + * 创建文件下载路由(支持所有存储类型) + * @param downloadPath 下载路径,默认为 
'/download' + * @returns Hono 应用实例 + */ +export function createFileDownloadRoutes(downloadPath: string = '/download') { + const app = new Hono(); + + // 通过文件ID下载文件 + app.get('/:fileId', async (c) => { + try { + // 获取并解码fileId + const encodedFileId = c.req.param('fileId'); + const fileId = decodeURIComponent(encodedFileId); + + console.log('Download request - Encoded fileId:', encodedFileId); + console.log('Download request - Decoded fileId:', fileId); + + const storageManager = StorageManager.getInstance(); + const storageType = storageManager.getStorageType(); + + // 从数据库获取文件信息 + const { status, resource } = await getResourceByFileId(fileId); + if (status !== 'UPLOADED' || !resource) { + return c.json({ error: `File not found or not ready. Status: ${status}, FileId: ${fileId}` }, 404); + } + + if (storageType === StorageType.LOCAL) { + // 本地存储:直接读取文件 + const config = storageManager.getStorageConfig(); + const uploadDir = config.local?.directory || './uploads'; + + // fileId 是目录路径格式,直接使用 + const fileDir = `${uploadDir}/${fileId}`; + + try { + // 使用 Node.js fs 而不是 Bun.file + const fs = await import('fs'); + const path = await import('path'); + + // 检查目录是否存在 + if (!fs.existsSync(fileDir)) { + return c.json({ error: `File directory not found: ${fileDir}` }, 404); + } + + // 读取目录内容,找到实际的文件(排除 .json 文件) + const files = fs.readdirSync(fileDir).filter((f) => !f.endsWith('.json')); + if (files.length === 0) { + return c.json({ error: `No file found in directory: ${fileDir}` }, 404); + } + + // 通常只有一个文件,取第一个 + const actualFileName = files[0]; + if (!actualFileName) { + return c.json({ error: 'No valid file found' }, 404); + } + const filePath = path.join(fileDir, actualFileName); + + // 获取文件统计信息 + const stats = fs.statSync(filePath); + const fileSize = stats.size; + + // 设置响应头 + c.header('Content-Type', resource.type || 'application/octet-stream'); + c.header('Content-Length', fileSize.toString()); + c.header('Content-Disposition', `inline; filename="${actualFileName}"`); + + // 返回文件流 + const fileStream = fs.createReadStream(filePath); + return new Response(fileStream as any); + } catch (error) { + console.error('Error reading local file:', error); + return c.json({ error: 'Failed to read file' }, 500); + } + } else if (storageType === StorageType.S3) { + // S3 存储:通过已配置的dataStore获取文件信息 + const dataStore = storageManager.getDataStore(); + + try { + // 对于S3存储,我们需要根据fileId构建完整路径 + // 由于S3Store的client是私有的,我们先尝试通过getUpload来验证文件存在 + await (dataStore as any).getUpload(fileId + '/dummy'); // 这会失败,但能验证连接 + } catch (error: any) { + // 如果是FILE_NOT_FOUND以外的错误,说明连接有问题 + if (error.message && !error.message.includes('FILE_NOT_FOUND')) { + console.error('S3 connection error:', error); + return c.json({ error: 'Failed to access S3 storage' }, 500); + } + } + + // 构建S3 URL - 使用resource信息重建完整路径 + // 这里我们假设文件名就是resource.title + const config = storageManager.getStorageConfig(); + const s3Config = config.s3!; + const fileName = resource.title || 'file'; + const fullS3Key = `${fileId}/${fileName}`; + + // 生成 S3 URL + let s3Url: string; + if (s3Config.endpoint && s3Config.endpoint !== 'https://s3.amazonaws.com') { + // 自定义 S3 兼容服务 + s3Url = `${s3Config.endpoint}/${s3Config.bucket}/${fullS3Key}`; + } else { + // AWS S3 + s3Url = `https://${s3Config.bucket}.s3.${s3Config.region}.amazonaws.com/${fullS3Key}`; + } + + console.log(`Redirecting to S3 URL: ${s3Url}`); + // 重定向到 S3 URL + return c.redirect(s3Url, 302); + } + + return c.json({ error: 'Unsupported storage type' }, 500); + } catch (error) { + console.error('Download 
error:', error); + return c.json({ error: 'Internal server error' }, 500); + } + }); + + return app; +} + +/** + * 创建完整的存储应用,包含API和上传功能 + * @param options 配置选项 + * @returns Hono 应用实例 + */ +export function createStorageApp( + options: { + apiBasePath?: string; + uploadPath?: string; + downloadPath?: string; + } = {}, +) { + const { apiBasePath = '/api/storage', uploadPath = '/upload', downloadPath = '/download' } = options; + + const app = new Hono(); + + // 添加存储API路由 + app.route(apiBasePath, createStorageRoutes()); + + // 添加TUS上传路由 + app.route(uploadPath, createTusUploadRoutes()); + + // 添加文件下载路由 + app.route(downloadPath, createFileDownloadRoutes()); + + return app; +} diff --git a/packages/storage/src/middleware/index.ts b/packages/storage/src/middleware/index.ts new file mode 100644 index 0000000..23a3b24 --- /dev/null +++ b/packages/storage/src/middleware/index.ts @@ -0,0 +1,5 @@ +// Hono 中间件 +export * from './hono'; + +// 便捷导出 +export { createStorageApp, createStorageRoutes, createTusUploadRoutes, createFileDownloadRoutes } from './hono'; diff --git a/packages/storage/src/services/index.ts b/packages/storage/src/services/index.ts new file mode 100644 index 0000000..4b689b1 --- /dev/null +++ b/packages/storage/src/services/index.ts @@ -0,0 +1,13 @@ +// TUS 上传处理 +export * from './tus'; + +// 存储工具 +export * from './utils'; + +// 调度器 +export * from './scheduler'; + +// 便捷导出 +export { StorageUtils } from './utils'; +export { getTusServer, handleTusRequest } from './tus'; +export { startCleanupScheduler, triggerCleanup } from './scheduler'; diff --git a/apps/backend/src/upload/scheduler.ts b/packages/storage/src/services/scheduler.ts similarity index 100% rename from apps/backend/src/upload/scheduler.ts rename to packages/storage/src/services/scheduler.ts diff --git a/apps/backend/src/upload/tus.ts b/packages/storage/src/services/tus.ts similarity index 59% rename from apps/backend/src/upload/tus.ts rename to packages/storage/src/services/tus.ts index d98c7fe..c073386 100644 --- a/apps/backend/src/upload/tus.ts +++ b/packages/storage/src/services/tus.ts @@ -1,9 +1,9 @@ import { Server, Upload } from '@repo/tus'; import { prisma } from '@repo/db'; -import { getFilenameWithoutExt } from '../utils/file'; import { nanoid } from 'nanoid'; import { slugify } from 'transliteration'; -import { StorageManager } from './storage.adapter'; +import { StorageManager } from '../core/adapter'; +import { createResource, updateResourceStatus } from '../database/operations'; const FILE_UPLOAD_CONFIG = { maxSizeBytes: 20_000_000_000, // 20GB @@ -32,40 +32,44 @@ function getFileId(uploadId: string) { return uploadId.replace(/\/[^/]+$/, ''); } +function getFilenameWithoutExt(filename: string): string { + const lastDotIndex = filename.lastIndexOf('.'); + return lastDotIndex > 0 ? 
filename.substring(0, lastDotIndex) : filename; +} + async function handleUploadCreate(req: any, res: any, upload: Upload, url: string) { try { + console.log(`[TUS] Upload create event for ${upload.id}, size: ${upload.size}, metadata:`, upload.metadata); const fileId = getFileId(upload.id); const storageManager = StorageManager.getInstance(); - await prisma.resource.create({ - data: { - title: getFilenameWithoutExt(upload.metadata?.filename || 'untitled'), - fileId, // 移除最后的文件名 - url: upload.id, - meta: upload.metadata, - status: ResourceStatus.UPLOADING, - storageType: storageManager.getStorageType(), // 记录存储类型 - }, + await createResource({ + fileId, + filename: upload.metadata?.filename || 'untitled', + size: upload.size || 0, + mimeType: upload.metadata?.filetype, + storageType: storageManager.getStorageType(), + status: ResourceStatus.UPLOADING, }); - console.log(`Resource created for ${upload.id} using ${storageManager.getStorageType()} storage`); + console.log(`[TUS] Resource created for ${upload.id} using ${storageManager.getStorageType()} storage`); } catch (error) { - console.error('Failed to create resource during upload', error); + console.error('[TUS] Failed to create resource during upload:', error); + // 不抛出错误,让上传继续进行 } } async function handleUploadFinish(req: any, res: any, upload: Upload) { try { - const resource = await prisma.resource.update({ - where: { fileId: getFileId(upload.id) }, - data: { status: ResourceStatus.UPLOADED }, - }); + console.log(`[TUS] Upload finish event for ${upload.id}, final size: ${upload.size}, offset: ${upload.offset}`); + const fileId = getFileId(upload.id); + await updateResourceStatus(fileId, ResourceStatus.UPLOADED); // TODO: 这里可以添加队列处理逻辑 // fileQueue.add(QueueJobType.FILE_PROCESS, { resource }, { jobId: resource.id }); - console.log(`Upload finished ${resource.url} using ${StorageManager.getInstance().getStorageType()} storage`); + console.log(`[TUS] Upload finished ${upload.id} using ${StorageManager.getInstance().getStorageType()} storage`); } catch (error) { - console.error('Failed to update resource after upload', error); + console.error('[TUS] Failed to update resource after upload:', error); } } @@ -78,6 +82,8 @@ function initializeTusServer() { const storageManager = StorageManager.getInstance(); const dataStore = storageManager.getDataStore(); + console.log(`[TUS] Initializing TUS server with ${storageManager.getStorageType()} storage`); + tusServer = new Server({ namingFunction(req, metadata) { const safeFilename = slugify(metadata?.filename || 'untitled'); @@ -86,7 +92,9 @@ function initializeTusServer() { const month = String(now.getMonth() + 1).padStart(2, '0'); const day = String(now.getDate()).padStart(2, '0'); const uniqueId = nanoid(10); - return `${year}/${month}/${day}/${uniqueId}/${safeFilename}`; + const fileName = `${year}/${month}/${day}/${uniqueId}/${safeFilename}`; + console.log(`[TUS] Generated filename: ${fileName} for upload with metadata:`, metadata); + return fileName; }, path: '/upload', datastore: dataStore, // 使用存储适配器 @@ -94,7 +102,20 @@ function initializeTusServer() { postReceiveInterval: 1000, getFileIdFromRequest: (req, lastPath) => { const match = req.url?.match(/\/upload\/(.+)/); - return match ? match[1] : lastPath; + const fileId = match ? 
match[1] : lastPath; + console.log(`[TUS] Extracted file ID: ${fileId} from URL: ${req.url}`); + return fileId; + }, + onIncomingRequest: async (req, res, id) => { + console.log(`[TUS] Incoming request for ${id}, method: ${req.method}, url: ${req.url}`); + }, + onUploadCreate: async (req, res, upload) => { + console.log(`[TUS] onUploadCreate called for ${upload.id}`); + return res; + }, + onUploadFinish: async (req, res, upload) => { + console.log(`[TUS] onUploadFinish called for ${upload.id}`); + return res; }, }); @@ -102,7 +123,12 @@ function initializeTusServer() { tusServer.on('POST_CREATE', handleUploadCreate); tusServer.on('POST_FINISH', handleUploadFinish); - console.log(`TUS server initialized with ${storageManager.getStorageType()} storage`); + // 添加错误处理 + tusServer.on('error', (error) => { + console.error('[TUS] Server error:', error); + }); + + console.log(`[TUS] TUS server initialized with ${storageManager.getStorageType()} storage`); return tusServer; } diff --git a/apps/backend/src/upload/storage.utils.ts b/packages/storage/src/services/utils.ts similarity index 71% rename from apps/backend/src/upload/storage.utils.ts rename to packages/storage/src/services/utils.ts index 1c37c8d..68fb5e4 100644 --- a/apps/backend/src/upload/storage.utils.ts +++ b/packages/storage/src/services/utils.ts @@ -1,4 +1,5 @@ -import { StorageManager, StorageType } from './storage.adapter'; +import { StorageManager } from '../core/adapter'; +import { StorageType } from '../types'; import path from 'path'; /** @@ -20,37 +21,46 @@ export class StorageUtils { } /** - * 生成文件访问URL + * 生成文件访问URL(统一使用下载接口) * @param fileId 文件ID - * @param isPublic 是否为公开访问链接 + * @param baseUrl 基础URL(可选,用于生成完整URL) * @returns 文件访问URL */ - public generateFileUrl(fileId: string, isPublic: boolean = false): string { + public generateFileUrl(fileId: string, baseUrl?: string): string { + const base = baseUrl || 'http://localhost:3000'; + return `${base}/download/${fileId}`; + } + + /** + * 生成文件下载URL(与 generateFileUrl 相同,保持兼容性) + * @param fileId 文件ID + * @param baseUrl 基础URL(可选,用于生成完整URL) + * @returns 文件下载URL + */ + public generateDownloadUrl(fileId: string, baseUrl?: string): string { + return this.generateFileUrl(fileId, baseUrl); + } + + /** + * 生成直接访问URL(仅用于S3存储) + * @param fileId 文件ID + * @returns S3直接访问URL + */ + public generateDirectUrl(fileId: string): string { const storageType = this.storageManager.getStorageType(); const config = this.storageManager.getStorageConfig(); - switch (storageType) { - case StorageType.LOCAL: - // 本地存储返回相对路径或服务器路径 - if (isPublic) { - // 假设有一个静态文件服务 - return `/uploads/${fileId}`; - } - return path.join(config.local?.directory || './uploads', fileId); - - case StorageType.S3: - // S3 存储返回对象存储路径 - const s3Config = config.s3!; - if (s3Config.endpoint && s3Config.endpoint !== 'https://s3.amazonaws.com') { - // 自定义 S3 兼容服务 - return `${s3Config.endpoint}/${s3Config.bucket}/${fileId}`; - } - // AWS S3 - return `https://${s3Config.bucket}.s3.${s3Config.region}.amazonaws.com/${fileId}`; - - default: - throw new Error(`Unsupported storage type: ${storageType}`); + if (storageType !== StorageType.S3) { + throw new Error('Direct URL is only available for S3 storage'); } + + const s3Config = config.s3!; + if (s3Config.endpoint && s3Config.endpoint !== 'https://s3.amazonaws.com') { + // 自定义 S3 兼容服务 + return `${s3Config.endpoint}/${s3Config.bucket}/${fileId}`; + } + // AWS S3 + return `https://${s3Config.bucket}.s3.${s3Config.region}.amazonaws.com/${fileId}`; } /** @@ -199,4 +209,29 @@ export class StorageUtils 
{ return stats; } + + /** + * 清理过期文件 + */ + public async cleanupExpiredFiles(): Promise<{ deletedCount: number }> { + const storageType = this.storageManager.getStorageType(); + const config = this.storageManager.getStorageConfig(); + let deletedCount = 0; + + // 获取过期时间配置 + const expirationMs = + storageType === StorageType.LOCAL + ? config.local?.expirationPeriodInMilliseconds + : config.s3?.expirationPeriodInMilliseconds; + + if (!expirationMs || expirationMs <= 0) { + // 没有配置过期时间,不执行清理 + return { deletedCount: 0 }; + } + + // TODO: 实现具体的清理逻辑 + // 这里需要根据存储类型和数据库记录来清理过期文件 + + return { deletedCount }; + } } diff --git a/packages/storage/src/types/index.ts b/packages/storage/src/types/index.ts new file mode 100644 index 0000000..2c6d6cc --- /dev/null +++ b/packages/storage/src/types/index.ts @@ -0,0 +1,51 @@ +export interface UploadCompleteEvent { + identifier: string; + filename: string; + size: number; + hash: string; + integrityVerified: boolean; +} + +export type UploadEvent = { + uploadStart: { + identifier: string; + filename: string; + totalSize: number; + resuming?: boolean; + }; + uploadComplete: UploadCompleteEvent; + uploadError: { identifier: string; error: string; filename: string }; +}; + +export interface UploadLock { + clientId: string; + timestamp: number; +} + +// 存储类型枚举 +export enum StorageType { + LOCAL = 'local', + S3 = 's3', +} + +// 存储配置接口 +export interface StorageConfig { + type: StorageType; + // 本地存储配置 + local?: { + directory: string; + expirationPeriodInMilliseconds?: number; + }; + // S3 存储配置 + s3?: { + bucket: string; + region: string; + accessKeyId: string; + secretAccessKey: string; + endpoint?: string; // 用于兼容其他 S3 兼容服务 + forcePathStyle?: boolean; + partSize?: number; + maxConcurrentPartUploads?: number; + expirationPeriodInMilliseconds?: number; + }; +} diff --git a/packages/storage/tsconfig.json b/packages/storage/tsconfig.json new file mode 100644 index 0000000..3a8900e --- /dev/null +++ b/packages/storage/tsconfig.json @@ -0,0 +1,29 @@ +{ + "extends": "../../tsconfig.json", + "compilerOptions": { + "outDir": "./dist", + "rootDir": "./src", + "declaration": true, + "declarationMap": true, + "sourceMap": true, + "target": "ES2020", + "module": "ESNext", + "moduleResolution": "bundler", + "allowSyntheticDefaultImports": true, + "esModuleInterop": true, + "strict": true, + "skipLibCheck": true, + "forceConsistentCasingInFileNames": true, + "isolatedModules": true, + "noEmitOnError": false + }, + "include": [ + "src/**/*" + ], + "exclude": [ + "dist", + "node_modules", + "**/*.test.ts", + "**/*.spec.ts" + ] +} \ No newline at end of file diff --git a/packages/tus/src/handlers/BaseHandler.ts b/packages/tus/src/handlers/BaseHandler.ts index 2315295..521bcfe 100644 --- a/packages/tus/src/handlers/BaseHandler.ts +++ b/packages/tus/src/handlers/BaseHandler.ts @@ -1,11 +1,11 @@ -import EventEmitter from 'node:events' -import stream from 'node:stream/promises' -import { addAbortSignal, PassThrough } from 'node:stream' -import type http from 'node:http' +import EventEmitter from 'node:events'; +import stream from 'node:stream/promises'; +import { addAbortSignal, PassThrough } from 'node:stream'; +import type http from 'node:http'; -import type { ServerOptions } from '../types' -import throttle from 'lodash.throttle' -import { CancellationContext, DataStore, ERRORS, EVENTS, StreamLimiter, Upload } from '../utils' +import type { ServerOptions } from '../types'; +import throttle from 'lodash.throttle'; +import { CancellationContext, DataStore, ERRORS, EVENTS, 
StreamLimiter, Upload } from '../utils'; /** * 正则表达式,用于从请求 URL 中提取文件 ID。 @@ -16,7 +16,7 @@ import { CancellationContext, DataStore, ERRORS, EVENTS, StreamLimiter, Upload } * - 输入 `/files/12345`,匹配结果为 `12345`。 * - 输入 `/files/12345/`,匹配结果为 `12345`。 */ -const reExtractFileID = /([^/]+)\/?$/ +const reExtractFileID = /([^/]+)\/?$/; /** * 正则表达式,用于从 HTTP 请求头中的 `forwarded` 字段提取主机名。 @@ -27,7 +27,7 @@ const reExtractFileID = /([^/]+)\/?$/ * - 输入 `host="example.com"`,匹配结果为 `example.com`。 * - 输入 `host=example.com`,匹配结果为 `example.com`。 */ -const reForwardedHost = /host="?([^";]+)/ +const reForwardedHost = /host="?([^";]+)/; /** * 正则表达式,用于从 HTTP 请求头中的 `forwarded` 字段提取协议(如 `http` 或 `https`)。 @@ -38,327 +38,308 @@ const reForwardedHost = /host="?([^";]+)/ * - 输入 `proto=https`,匹配结果为 `https`。 * - 输入 `proto=http`,匹配结果为 `http`。 */ -const reForwardedProto = /proto=(https?)/ +const reForwardedProto = /proto=(https?)/; /** * BaseHandler 类是一个基础处理器,用于处理 TUS 协议的上传请求。 * 它继承自 Node.js 的 EventEmitter,允许发出和监听事件。 */ export class BaseHandler extends EventEmitter { - options: ServerOptions - store: DataStore + options: ServerOptions; + store: DataStore; - /** - * 构造函数,初始化 BaseHandler 实例。 - * @param store - 数据存储对象,用于处理上传数据的存储。 - * @param options - 服务器配置选项。 - * @throws 如果未提供 store 参数,则抛出错误。 - */ - constructor(store: DataStore, options: ServerOptions) { - super() - if (!store) { - throw new Error('Store must be defined') - } + /** + * 构造函数,初始化 BaseHandler 实例。 + * @param store - 数据存储对象,用于处理上传数据的存储。 + * @param options - 服务器配置选项。 + * @throws 如果未提供 store 参数,则抛出错误。 + */ + constructor(store: DataStore, options: ServerOptions) { + super(); + if (!store) { + throw new Error('Store must be defined'); + } - this.store = store - this.options = options - } + this.store = store; + this.options = options; + } - /** - * 向客户端发送 HTTP 响应。 - * @param res - HTTP 响应对象。 - * @param status - HTTP 状态码。 - * @param headers - 响应头对象。 - * @param body - 响应体内容。 - * @returns 返回结束的响应对象。 - */ - write(res: http.ServerResponse, status: number, headers = {}, body = '') { - if (status !== 204) { - // @ts-expect-error not explicitly typed but possible - headers['Content-Length'] = Buffer.byteLength(body, 'utf8') - } + /** + * 向客户端发送 HTTP 响应。 + * @param res - HTTP 响应对象。 + * @param status - HTTP 状态码。 + * @param headers - 响应头对象。 + * @param body - 响应体内容。 + * @returns 返回结束的响应对象。 + */ + write(res: http.ServerResponse, status: number, headers = {}, body = '') { + if (status !== 204) { + (headers as any)['Content-Length'] = Buffer.byteLength(body, 'utf8'); + } - res.writeHead(status, headers) - res.write(body) - return res.end() - } + res.writeHead(status, headers); + res.write(body); + return res.end(); + } - /** - * 生成上传文件的 URL。 - * @param req - HTTP 请求对象。 - * @param id - 文件 ID。 - * @returns 返回生成的 URL。 - */ - generateUrl(req: http.IncomingMessage, id: string) { - const path = this.options.path === '/' ? '' : this.options.path - if (this.options.generateUrl) { - // 使用用户定义的 generateUrl 函数生成 URL - const { proto, host } = this.extractHostAndProto(req) - return this.options.generateUrl(req, { - proto, - host, - path: path, - id, - }) - } + /** + * 生成上传文件的 URL。 + * @param req - HTTP 请求对象。 + * @param id - 文件 ID。 + * @returns 返回生成的 URL。 + */ + generateUrl(req: http.IncomingMessage, id: string) { + const path = this.options.path === '/' ? 
'' : this.options.path; + if (this.options.generateUrl) { + // 使用用户定义的 generateUrl 函数生成 URL + const { proto, host } = this.extractHostAndProto(req); + return this.options.generateUrl(req, { + proto, + host, + path: path, + id, + }); + } - // 默认实现 - if (this.options.relativeLocation) { - return `${path}/${id}` - } + // 默认实现 + if (this.options.relativeLocation) { + return `${path}/${id}`; + } - const { proto, host } = this.extractHostAndProto(req) + const { proto, host } = this.extractHostAndProto(req); - return `${proto}://${host}${path}/${id}` - } + return `${proto}://${host}${path}/${id}`; + } - /** - * 从请求中提取文件 ID。 - * @param req - HTTP 请求对象。 - * @returns 返回提取的文件 ID,如果未找到则返回 undefined。 - */ - getFileIdFromRequest(req: http.IncomingMessage) { - const match = reExtractFileID.exec(req.url as string) + /** + * 从请求中提取文件 ID。 + * @param req - HTTP 请求对象。 + * @returns 返回提取的文件 ID,如果未找到则返回 undefined。 + */ + getFileIdFromRequest(req: http.IncomingMessage) { + const match = reExtractFileID.exec(req.url as string); - if (this.options.getFileIdFromRequest) { - const lastPath = match ? decodeURIComponent(match[1]) : undefined - return this.options.getFileIdFromRequest(req, lastPath) - } + if (this.options.getFileIdFromRequest) { + const lastPath = match?.[1] ? decodeURIComponent(match[1]) : undefined; + return this.options.getFileIdFromRequest(req, lastPath); + } - if (!match || this.options.path.includes(match[1])) { - return - } + if (!match?.[1] || this.options.path.includes(match[1])) { + return; + } - return decodeURIComponent(match[1]) - } + return decodeURIComponent(match[1]); + } - /** - * 从 HTTP 请求中提取主机名和协议信息。 - * 该方法首先检查是否启用了尊重转发头(respectForwardedHeaders)选项, - * 如果启用,则从请求头中提取转发的主机名和协议信息。 - * 如果未启用或未找到转发信息,则使用请求头中的主机名和默认协议(http)。 - * - * @param req - HTTP 请求对象,包含请求头等信息。 - * @returns 返回包含主机名和协议的对象。 - */ - protected extractHostAndProto(req: http.IncomingMessage) { - let proto: string | undefined - let host: string | undefined + /** + * 从 HTTP 请求中提取主机名和协议信息。 + * 该方法首先检查是否启用了尊重转发头(respectForwardedHeaders)选项, + * 如果启用,则从请求头中提取转发的主机名和协议信息。 + * 如果未启用或未找到转发信息,则使用请求头中的主机名和默认协议(http)。 + * + * @param req - HTTP 请求对象,包含请求头等信息。 + * @returns 返回包含主机名和协议的对象。 + */ + protected extractHostAndProto(req: http.IncomingMessage) { + let proto: string | undefined; + let host: string | undefined; - // 如果启用了尊重转发头选项 - if (this.options.respectForwardedHeaders) { - // 从请求头中获取 forwarded 字段 - const forwarded = req.headers.forwarded as string | undefined - if (forwarded) { - // 使用正则表达式从 forwarded 字段中提取主机名和协议 - host ??= reForwardedHost.exec(forwarded)?.[1] - proto ??= reForwardedProto.exec(forwarded)?.[1] - } + // 如果启用了尊重转发头选项 + if (this.options.respectForwardedHeaders) { + // 从请求头中获取 forwarded 字段 + const forwarded = req.headers.forwarded as string | undefined; + if (forwarded) { + // 使用正则表达式从 forwarded 字段中提取主机名和协议 + host ??= reForwardedHost.exec(forwarded)?.[1]; + proto ??= reForwardedProto.exec(forwarded)?.[1]; + } - // 从请求头中获取 x-forwarded-host 和 x-forwarded-proto 字段 - const forwardHost = req.headers['x-forwarded-host'] - const forwardProto = req.headers['x-forwarded-proto'] + // 从请求头中获取 x-forwarded-host 和 x-forwarded-proto 字段 + const forwardHost = req.headers['x-forwarded-host']; + const forwardProto = req.headers['x-forwarded-proto']; - // 检查 x-forwarded-proto 是否为有效的协议(http 或 https) - // @ts-expect-error we can pass undefined - if (['http', 'https'].includes(forwardProto)) { - proto ??= forwardProto as string - } + // 检查 x-forwarded-proto 是否为有效的协议(http 或 https) + // @ts-expect-error we can pass undefined + if (['http', 
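To make the forwarded-header handling above concrete, here is what the two module-level regexes extract from a typical `Forwarded` header (standalone sketch, runnable as-is):

```typescript
const forwarded = 'for=192.0.2.60;proto=https;host="upload.example.com"';

// Same patterns as the module-level constants above.
const reForwardedHost = /host="?([^";]+)/;
const reForwardedProto = /proto=(https?)/;

console.log(reForwardedHost.exec(forwarded)?.[1]); // "upload.example.com"
console.log(reForwardedProto.exec(forwarded)?.[1]); // "https"
```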
'https'].includes(forwardProto)) { + proto ??= forwardProto as string; + } - // 如果 x-forwarded-host 存在,则使用它作为主机名 - host ??= forwardHost as string - } + // 如果 x-forwarded-host 存在,则使用它作为主机名 + host ??= forwardHost as string; + } - // 如果未从转发头中获取到主机名,则使用请求头中的 host 字段 - host ??= req.headers.host - // 如果未从转发头中获取到协议,则默认使用 http - proto ??= 'http' + // 如果未从转发头中获取到主机名,则使用请求头中的 host 字段 + host ??= req.headers.host; + // 如果未从转发头中获取到协议,则默认使用 http + proto ??= 'http'; - // 返回包含主机名和协议的对象 - return { host: host as string, proto } - } + // 返回包含主机名和协议的对象 + return { host: host as string, proto }; + } - /** - * 获取锁对象。 - * @param req - HTTP 请求对象。 - * @returns 返回锁对象。 - */ - protected async getLocker(req: http.IncomingMessage) { - if (typeof this.options.locker === 'function') { - return this.options.locker(req) - } - return this.options.locker - } + /** + * 获取锁对象。 + * @param req - HTTP 请求对象。 + * @returns 返回锁对象。 + */ + protected async getLocker(req: http.IncomingMessage) { + if (typeof this.options.locker === 'function') { + return this.options.locker(req); + } + return this.options.locker; + } - /** - * 获取锁并锁定资源。 - * @param req - HTTP 请求对象。 - * @param id - 文件 ID。 - * @param context - 取消上下文对象。 - * @returns 返回锁对象。 - */ - protected async acquireLock( - req: http.IncomingMessage, - id: string, - context: CancellationContext - ) { - const locker = await this.getLocker(req) + /** + * 获取锁并锁定资源。 + * @param req - HTTP 请求对象。 + * @param id - 文件 ID。 + * @param context - 取消上下文对象。 + * @returns 返回锁对象。 + */ + protected async acquireLock(req: http.IncomingMessage, id: string, context: CancellationContext) { + const locker = await this.getLocker(req); - const lock = locker.newLock(id) + const lock = locker.newLock(id); - await lock.lock(() => { - context.cancel() - }) + await lock.lock(() => { + context.cancel(); + }); - return lock - } + return lock; + } + /** + * 将请求体数据写入存储。 + * 该方法负责将 HTTP 请求体中的数据流式传输到存储系统中,同时处理取消操作、错误处理和进度更新。 + * + * @param req - HTTP 请求对象,包含请求体数据流。 + * @param upload - 上传对象,包含上传的元数据(如文件 ID、偏移量等)。 + * @param maxFileSize - 允许的最大文件大小,用于限制写入的数据量。 + * @param context - 取消上下文对象,用于处理取消操作。 + * @returns 返回一个 Promise,解析为写入的字节数。 + */ + protected writeToStore(req: http.IncomingMessage, upload: Upload, maxFileSize: number, context: CancellationContext) { + // 使用 Promise 包装异步操作,以便更好地处理取消和错误。 + // biome-ignore lint/suspicious/noAsyncPromiseExecutor: + return new Promise(async (resolve, reject) => { + // 检查是否已被取消,如果已取消则直接拒绝 Promise。 + if (context.signal.aborted) { + reject(ERRORS.ABORTED); + return; + } - /** - * 将请求体数据写入存储。 - * 该方法负责将 HTTP 请求体中的数据流式传输到存储系统中,同时处理取消操作、错误处理和进度更新。 - * - * @param req - HTTP 请求对象,包含请求体数据流。 - * @param upload - 上传对象,包含上传的元数据(如文件 ID、偏移量等)。 - * @param maxFileSize - 允许的最大文件大小,用于限制写入的数据量。 - * @param context - 取消上下文对象,用于处理取消操作。 - * @returns 返回一个 Promise,解析为写入的字节数。 - */ - protected writeToStore( - req: http.IncomingMessage, - upload: Upload, - maxFileSize: number, - context: CancellationContext - ) { - // 使用 Promise 包装异步操作,以便更好地处理取消和错误。 - // biome-ignore lint/suspicious/noAsyncPromiseExecutor: - return new Promise(async (resolve, reject) => { - // 检查是否已被取消,如果已取消则直接拒绝 Promise。 - if (context.signal.aborted) { - reject(ERRORS.ABORTED) - return - } + // 创建一个 PassThrough 流作为代理,用于管理请求流。 + // PassThrough 流是一个透明的流,它允许数据通过而不进行任何修改。 + // 使用代理流的好处是可以在不影响原始请求流的情况下中止写入过程。 + const proxy = new PassThrough(); + // 将取消信号与代理流关联,以便在取消时自动中止流。 + addAbortSignal(context.signal, proxy); + // 监听代理流的错误事件,处理流中的错误。 + proxy.on('error', (err) => { + // 取消请求流与代理流的管道连接。 + req.unpipe(proxy); + // 如果错误是 AbortError,则返回 ABORTED 错误,否则返回原始错误。 + 
reject(err.name === 'AbortError' ? ERRORS.ABORTED : err); + }); + // 使用 throttle 函数创建一个节流函数,用于定期触发 POST_RECEIVE_V2 事件。 + // 该事件用于通知上传进度,避免频繁触发事件导致性能问题。 + const postReceive = throttle( + (offset: number) => { + // 触发 POST_RECEIVE_V2 事件,传递当前上传的偏移量。 + this.emit(EVENTS.POST_RECEIVE_V2, req, { ...upload, offset }); + }, + // 设置节流的时间间隔,避免事件触发过于频繁。 + this.options.postReceiveInterval, + { leading: false }, + ); + // 临时变量,用于跟踪当前写入的偏移量。 + let tempOffset = upload.offset; + // 监听代理流的 data 事件,每当有数据块通过时更新偏移量并触发进度事件。 + proxy.on('data', (chunk: Buffer) => { + tempOffset += chunk.byteLength; + postReceive(tempOffset); + }); + // 监听请求流的 error 事件,处理请求流中的错误。 + req.on('error', () => { + // 如果代理流未关闭,则优雅地结束流,以便将剩余的字节作为 incompletePart 上传到存储。 + if (!proxy.closed) { + proxy.end(); + } + }); + // 使用 stream.pipeline 将请求流通过代理流和 StreamLimiter 传输到存储系统。 + // StreamLimiter 用于限制写入的数据量,确保不超过最大文件大小。 + stream + .pipeline( + // 将请求流通过代理流传输。 + req.pipe(proxy), + // 使用 StreamLimiter 限制写入的数据量。 + new StreamLimiter(maxFileSize), + // 将数据流写入存储系统。 + async (stream) => { + return this.store.write(stream as StreamLimiter, upload.id, upload.offset); + }, + ) + // 如果管道操作成功,则解析 Promise 并返回写入的字节数。 + .then(resolve) + // 如果管道操作失败,则拒绝 Promise 并返回错误。 + .catch(reject); + }); + } - // 创建一个 PassThrough 流作为代理,用于管理请求流。 - // PassThrough 流是一个透明的流,它允许数据通过而不进行任何修改。 - // 使用代理流的好处是可以在不影响原始请求流的情况下中止写入过程。 - const proxy = new PassThrough() - // 将取消信号与代理流关联,以便在取消时自动中止流。 - addAbortSignal(context.signal, proxy) - // 监听代理流的错误事件,处理流中的错误。 - proxy.on('error', (err) => { - // 取消请求流与代理流的管道连接。 - req.unpipe(proxy) - // 如果错误是 AbortError,则返回 ABORTED 错误,否则返回原始错误。 - reject(err.name === 'AbortError' ? ERRORS.ABORTED : err) - }) - // 使用 throttle 函数创建一个节流函数,用于定期触发 POST_RECEIVE_V2 事件。 - // 该事件用于通知上传进度,避免频繁触发事件导致性能问题。 - const postReceive = throttle( - (offset: number) => { - // 触发 POST_RECEIVE_V2 事件,传递当前上传的偏移量。 - this.emit(EVENTS.POST_RECEIVE_V2, req, { ...upload, offset }) - }, - // 设置节流的时间间隔,避免事件触发过于频繁。 - this.options.postReceiveInterval, - { leading: false } - ) - // 临时变量,用于跟踪当前写入的偏移量。 - let tempOffset = upload.offset - // 监听代理流的 data 事件,每当有数据块通过时更新偏移量并触发进度事件。 - proxy.on('data', (chunk: Buffer) => { - tempOffset += chunk.byteLength - postReceive(tempOffset) - }) - // 监听请求流的 error 事件,处理请求流中的错误。 - req.on('error', () => { - // 如果代理流未关闭,则优雅地结束流,以便将剩余的字节作为 incompletePart 上传到存储。 - if (!proxy.closed) { - proxy.end() - } - }) - // 使用 stream.pipeline 将请求流通过代理流和 StreamLimiter 传输到存储系统。 - // StreamLimiter 用于限制写入的数据量,确保不超过最大文件大小。 - stream - .pipeline( - // 将请求流通过代理流传输。 - req.pipe(proxy), - // 使用 StreamLimiter 限制写入的数据量。 - new StreamLimiter(maxFileSize), - // 将数据流写入存储系统。 - async (stream) => { - return this.store.write(stream as StreamLimiter, upload.id, upload.offset) - } - ) - // 如果管道操作成功,则解析 Promise 并返回写入的字节数。 - .then(resolve) - // 如果管道操作失败,则拒绝 Promise 并返回错误。 - .catch(reject) - }) - } + /** + * 获取配置的最大文件大小。 + * @param req - HTTP 请求对象。 + * @param id - 文件 ID。 + * @returns 返回配置的最大文件大小。 + */ + getConfiguredMaxSize(req: http.IncomingMessage, id: string | null) { + if (typeof this.options.maxSize === 'function') { + return this.options.maxSize(req, id); + } + return this.options.maxSize ?? 0; + } - /** - * 获取配置的最大文件大小。 - * @param req - HTTP 请求对象。 - * @param id - 文件 ID。 - * @returns 返回配置的最大文件大小。 - */ - getConfiguredMaxSize(req: http.IncomingMessage, id: string | null) { - if (typeof this.options.maxSize === 'function') { - return this.options.maxSize(req, id) - } - return this.options.maxSize ?? 
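The PassThrough proxy used in `writeToStore` is the part worth isolating: aborting the signal destroys only the proxy, never the original request stream. A self-contained sketch of the same pattern, assuming a throwaway file as the sink:

```typescript
import { addAbortSignal, PassThrough } from 'node:stream';
import { pipeline } from 'node:stream/promises';
import { createWriteStream } from 'node:fs';

async function writeWithAbort(source: NodeJS.ReadableStream, signal: AbortSignal) {
  const proxy = new PassThrough();
  // Aborting `signal` destroys the proxy; `source` itself is untouched,
  // mirroring how the handler keeps `req` usable after cancellation.
  addAbortSignal(signal, proxy);
  source.pipe(proxy);
  try {
    await pipeline(proxy, createWriteStream('/tmp/upload.part')); // hypothetical sink
  } catch (err) {
    if ((err as Error).name === 'AbortError') {
      throw new Error('upload aborted'); // stands in for ERRORS.ABORTED
    }
    throw err;
  }
}
```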
0 - } + /** + * 计算上传请求体的最大允许大小。 + * 该函数考虑了服务器配置的最大大小和上传的具体情况,例如大小是延迟的还是固定的。 + * @param req - HTTP 请求对象。 + * @param file - 上传对象。 + * @param configuredMaxSize - 配置的最大大小。 + * @returns 返回计算出的最大请求体大小。 + * @throws 如果上传大小超过允许的最大大小,则抛出 ERRORS.ERR_SIZE_EXCEEDED 错误。 + */ + async calculateMaxBodySize(req: http.IncomingMessage, file: Upload, configuredMaxSize?: number) { + // 如果未明确提供,则使用服务器配置的最大大小。 + configuredMaxSize ??= await this.getConfiguredMaxSize(req, file.id); - /** - * 计算上传请求体的最大允许大小。 - * 该函数考虑了服务器配置的最大大小和上传的具体情况,例如大小是延迟的还是固定的。 - * @param req - HTTP 请求对象。 - * @param file - 上传对象。 - * @param configuredMaxSize - 配置的最大大小。 - * @returns 返回计算出的最大请求体大小。 - * @throws 如果上传大小超过允许的最大大小,则抛出 ERRORS.ERR_SIZE_EXCEEDED 错误。 - */ - async calculateMaxBodySize( - req: http.IncomingMessage, - file: Upload, - configuredMaxSize?: number - ) { - // 如果未明确提供,则使用服务器配置的最大大小。 - configuredMaxSize ??= await this.getConfiguredMaxSize(req, file.id) + // 从请求中解析 Content-Length 头(如果未设置,则默认为 0)。 + const length = Number.parseInt(req.headers['content-length'] || '0', 10); + const offset = file.offset; - // 从请求中解析 Content-Length 头(如果未设置,则默认为 0)。 - const length = Number.parseInt(req.headers['content-length'] || '0', 10) - const offset = file.offset + const hasContentLengthSet = req.headers['content-length'] !== undefined; + const hasConfiguredMaxSizeSet = configuredMaxSize > 0; - const hasContentLengthSet = req.headers['content-length'] !== undefined - const hasConfiguredMaxSizeSet = configuredMaxSize > 0 + if (file.sizeIsDeferred) { + // 对于延迟大小的上传,如果不是分块传输,则检查配置的最大大小。 + if (hasContentLengthSet && hasConfiguredMaxSizeSet && offset + length > configuredMaxSize) { + throw ERRORS.ERR_SIZE_EXCEEDED; + } - if (file.sizeIsDeferred) { - // 对于延迟大小的上传,如果不是分块传输,则检查配置的最大大小。 - if ( - hasContentLengthSet && - hasConfiguredMaxSizeSet && - offset + length > configuredMaxSize - ) { - throw ERRORS.ERR_SIZE_EXCEEDED - } + if (hasConfiguredMaxSizeSet) { + return configuredMaxSize - offset; + } + return Number.MAX_SAFE_INTEGER; + } - if (hasConfiguredMaxSizeSet) { - return configuredMaxSize - offset - } - return Number.MAX_SAFE_INTEGER - } + // 检查上传是否适合文件的大小(当大小不是延迟的时)。 + if (offset + length > (file.size || 0)) { + throw ERRORS.ERR_SIZE_EXCEEDED; + } - // 检查上传是否适合文件的大小(当大小不是延迟的时)。 - if (offset + length > (file.size || 0)) { - throw ERRORS.ERR_SIZE_EXCEEDED - } + if (hasContentLengthSet) { + return length; + } - if (hasContentLengthSet) { - return length - } - - return (file.size || 0) - offset - } -} \ No newline at end of file + return (file.size || 0) - offset; + } +} diff --git a/packages/tus/src/server.ts b/packages/tus/src/server.ts index ab1aeaf..4c80e52 100644 --- a/packages/tus/src/server.ts +++ b/packages/tus/src/server.ts @@ -1,16 +1,16 @@ -import http from "node:http"; -import { EventEmitter } from "node:events"; -import debug from "debug"; -import { GetHandler } from "./handlers/GetHandler"; -import { HeadHandler } from "./handlers/HeadHandler"; -import { OptionsHandler } from "./handlers/OptionsHandler"; -import { PatchHandler } from "./handlers/PatchHandler"; -import { PostHandler } from "./handlers/PostHandler"; -import { DeleteHandler } from "./handlers/DeleteHandler"; -import { validateHeader } from "./validators/HeaderValidator"; -import type stream from "node:stream"; -import type { ServerOptions, RouteHandler, WithOptional } from "./types"; -import { MemoryLocker } from "./lockers"; +import http from 'node:http'; +import { EventEmitter } from 'node:events'; +import debug from 'debug'; +import { GetHandler } from 
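A quick numeric walk-through of `calculateMaxBodySize` (values invented for illustration):

```typescript
const MiB = 1024 * 1024;

// Fixed-size upload: offset + Content-Length must fit within file.size.
const fileSize = 100 * MiB;
const offset = 60 * MiB;
const contentLength = 50 * MiB;
console.log(offset + contentLength > fileSize); // true -> ERR_SIZE_EXCEEDED

// Deferred-size upload with a configured maxSize: instead of throwing,
// the handler returns the remaining budget for this request body.
const configuredMaxSize = 1024 * MiB; // 1 GiB
console.log(configuredMaxSize - offset); // 1010827264 bytes (964 MiB)
```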
'./handlers/GetHandler'; +import { HeadHandler } from './handlers/HeadHandler'; +import { OptionsHandler } from './handlers/OptionsHandler'; +import { PatchHandler } from './handlers/PatchHandler'; +import { PostHandler } from './handlers/PostHandler'; +import { DeleteHandler } from './handlers/DeleteHandler'; +import { validateHeader } from './validators/HeaderValidator'; +import type stream from 'node:stream'; +import type { ServerOptions, RouteHandler, WithOptional } from './types'; +import { MemoryLocker } from './lockers'; import { EVENTS, Upload, @@ -20,7 +20,7 @@ import { TUS_RESUMABLE, EXPOSED_HEADERS, CancellationContext, -} from "./utils"; +} from './utils'; /** * 处理器类型映射 @@ -47,32 +47,20 @@ interface TusEvents { * @param upload 上传对象实例 * @param url 生成的文件URL */ - [EVENTS.POST_CREATE]: ( - req: http.IncomingMessage, - res: http.ServerResponse, - upload: Upload, - url: string - ) => void; + [EVENTS.POST_CREATE]: (req: http.IncomingMessage, res: http.ServerResponse, upload: Upload, url: string) => void; /** * @deprecated 文件接收事件(已废弃) * 建议使用 POST_RECEIVE_V2 替代 */ - [EVENTS.POST_RECEIVE]: ( - req: http.IncomingMessage, - res: http.ServerResponse, - upload: Upload - ) => void; + [EVENTS.POST_RECEIVE]: (req: http.IncomingMessage, res: http.ServerResponse, upload: Upload) => void; /** * 文件接收事件V2版本 * @param req HTTP请求对象 * @param upload 上传对象实例 */ - [EVENTS.POST_RECEIVE_V2]: ( - req: http.IncomingMessage, - upload: Upload - ) => void; + [EVENTS.POST_RECEIVE_V2]: (req: http.IncomingMessage, upload: Upload) => void; /** * 文件上传完成事件 @@ -80,11 +68,7 @@ interface TusEvents { * @param res HTTP响应对象 * @param upload 上传对象实例 */ - [EVENTS.POST_FINISH]: ( - req: http.IncomingMessage, - res: http.ServerResponse, - upload: Upload - ) => void; + [EVENTS.POST_FINISH]: (req: http.IncomingMessage, res: http.ServerResponse, upload: Upload) => void; /** * 文件终止上传事件 @@ -92,18 +76,14 @@ interface TusEvents { * @param res HTTP响应对象 * @param id 文件唯一标识符 */ - [EVENTS.POST_TERMINATE]: ( - req: http.IncomingMessage, - res: http.ServerResponse, - id: string - ) => void; + [EVENTS.POST_TERMINATE]: (req: http.IncomingMessage, res: http.ServerResponse, id: string) => void; } /** * EventEmitter事件处理器类型别名 */ -type on = EventEmitter["on"]; -type emit = EventEmitter["emit"]; +type on = EventEmitter['on']; +type emit = EventEmitter['emit']; /** * TUS服务器接口声明 @@ -116,10 +96,7 @@ export declare interface Server { * @param listener 事件触发时执行的回调函数 * @returns 返回Server实例以支持链式调用 */ - on( - event: Event, - listener: TusEvents[Event] - ): this; + on(event: Event, listener: TusEvents[Event]): this; /** * 为指定事件注册监听器(通用版本) * @param eventName 事件名称 @@ -133,25 +110,19 @@ export declare interface Server { * @param listener 事件触发时执行的回调函数 * @returns 返回emit函数的返回值 */ - emit( - event: Event, - listener: TusEvents[Event] - ): ReturnType; + emit(event: Event, listener: TusEvents[Event]): ReturnType; /** * 触发指定事件(通用版本) * @param eventName 事件名称 * @param listener 事件触发时执行的回调函数 * @returns 返回emit函数的返回值 */ - emit( - eventName: Parameters[0], - listener: Parameters[1] - ): ReturnType; + emit(eventName: Parameters[0], listener: Parameters[1]): ReturnType; } /** * 调试日志工具实例 */ -const log = debug("tus-node-server"); +const log = debug('tus-node-server'); // biome-ignore lint/suspicious/noUnsafeDeclarationMerging: it's fine export class Server extends EventEmitter { @@ -165,9 +136,9 @@ export class Server extends EventEmitter { * @throws 如果未提供 options、path 或 datastore,将抛出错误 */ constructor( - options: WithOptional & { + options: WithOptional & { datastore: DataStore; - } + 
}, ) { super(); @@ -180,9 +151,7 @@ export class Server extends EventEmitter { } if (!options.datastore) { - throw new Error( - "'datastore' is not defined; must have a datastore" - ); + throw new Error("'datastore' is not defined; must have a datastore"); } if (!options.locker) { @@ -214,14 +183,14 @@ export class Server extends EventEmitter { // 当数据存储分配给服务器时,它们会被设置/重置。 // 从服务器中移除任何事件监听器时,必须先从每个处理器中移除监听器。 // 这必须在添加 'newListener' 监听器之前完成,以避免为所有请求处理器添加 'removeListener' 事件监听器。 - this.on("removeListener", (event: string, listener) => { + this.on('removeListener', (event: string, listener) => { this.datastore.removeListener(event, listener); for (const method of REQUEST_METHODS) { this.handlers[method].removeListener(event, listener); } }); // 当事件监听器被添加到服务器时,确保它们从请求处理器冒泡到服务器级别。 - this.on("newListener", (event: string, listener) => { + this.on('newListener', (event: string, listener) => { this.datastore.on(event, listener); for (const method of REQUEST_METHODS) { this.handlers[method].on(event, listener); @@ -246,33 +215,20 @@ export class Server extends EventEmitter { */ async handle( req: http.IncomingMessage, - res: http.ServerResponse + res: http.ServerResponse, // biome-ignore lint/suspicious/noConfusingVoidType: it's fine ): Promise { const context = this.createContext(req); log(`[TusServer] handle: ${req.method} ${req.url}`); // 允许覆盖 HTTP 方法。这样做的原因是某些库/环境不支持 PATCH 和 DELETE 请求,例如浏览器中的 Flash 和 Java 部分环境 - if (req.headers["x-http-method-override"]) { - req.method = ( - req.headers["x-http-method-override"] as string - ).toUpperCase(); + if (req.headers['x-http-method-override']) { + req.method = (req.headers['x-http-method-override'] as string).toUpperCase(); } - const onError = async (error: { - status_code?: number; - body?: string; - message: string; - }) => { - let status_code = - error.status_code || ERRORS.UNKNOWN_ERROR.status_code; - let body = - error.body || - `${ERRORS.UNKNOWN_ERROR.body}${error.message || ""}\n`; + const onError = async (error: { status_code?: number; body?: string; message: string }) => { + let status_code = error.status_code || ERRORS.UNKNOWN_ERROR.status_code; + let body = error.body || `${ERRORS.UNKNOWN_ERROR.body}${error.message || ''}\n`; if (this.options.onResponseError) { - const errorMapping = await this.options.onResponseError( - req, - res, - error as Error - ); + const errorMapping = await this.options.onResponseError(req, res, error as Error); if (errorMapping) { status_code = errorMapping.status_code; body = errorMapping.body; @@ -280,67 +236,42 @@ export class Server extends EventEmitter { } return this.write(context, req, res, status_code, body); }; - if (req.method === "GET") { + if (req.method === 'GET') { const handler = this.handlers.GET; return handler.send(req, res).catch(onError); } // Tus-Resumable 头部必须包含在每个请求和响应中,除了 OPTIONS 请求。其值必须是客户端或服务器使用的协议版本。 - res.setHeader("Tus-Resumable", TUS_RESUMABLE); - if ( - req.method !== "OPTIONS" && - req.headers["tus-resumable"] === undefined - ) { - return this.write( - context, - req, - res, - 412, - "Tus-Resumable Required\n" - ); + res.setHeader('Tus-Resumable', TUS_RESUMABLE); + if (req.method !== 'OPTIONS' && req.headers['tus-resumable'] === undefined) { + return this.write(context, req, res, 412, 'Tus-Resumable Required\n'); } // 验证所有必需的头部以符合 tus 协议 - const invalid_headers = []; + const invalid_headers: string[] = []; for (const header_name in req.headers) { - if (req.method === "OPTIONS") { + if (req.method === 'OPTIONS') { continue; } // 内容类型仅对 PATCH 请求进行检查。对于所有其他请求方法,它将被忽略并视为未设置内容类型, // 因为某些 
HTTP 客户端可能会为此头部强制执行默认值。 // 参见 https://github.com/tus/tus-node-server/pull/116 - if ( - header_name.toLowerCase() === "content-type" && - req.method !== "PATCH" - ) { + if (header_name.toLowerCase() === 'content-type' && req.method !== 'PATCH') { continue; } - if ( - !validateHeader( - header_name, - req.headers[header_name] as string | undefined - ) - ) { - log( - `Invalid ${header_name} header: ${req.headers[header_name]}` - ); + if (!validateHeader(header_name, req.headers[header_name] as string | undefined)) { + log(`Invalid ${header_name} header: ${req.headers[header_name]}`); invalid_headers.push(header_name); } } if (invalid_headers.length > 0) { - return this.write( - context, - req, - res, - 400, - `Invalid ${invalid_headers.join(" ")}\n` - ); + return this.write(context, req, res, 400, `Invalid ${invalid_headers.join(' ')}\n`); } // 启用 CORS - res.setHeader("Access-Control-Allow-Origin", this.getCorsOrigin(req)); - res.setHeader("Access-Control-Expose-Headers", EXPOSED_HEADERS); + res.setHeader('Access-Control-Allow-Origin', this.getCorsOrigin(req)); + res.setHeader('Access-Control-Expose-Headers', EXPOSED_HEADERS); if (this.options.allowedCredentials === true) { - res.setHeader("Access-Control-Allow-Credentials", "true"); + res.setHeader('Access-Control-Allow-Credentials', 'true'); } // 调用请求方法的处理器 @@ -349,7 +280,7 @@ export class Server extends EventEmitter { return handler.send(req, res, context).catch(onError); } - return this.write(context, req, res, 404, "Not found\n"); + return this.write(context, req, res, 404, 'Not found\n'); } /** @@ -369,24 +300,18 @@ export class Server extends EventEmitter { private getCorsOrigin(req: http.IncomingMessage): string { const origin = req.headers.origin; // 检查请求头中的`origin`是否在允许的源列表中 - const isOriginAllowed = - this.options.allowedOrigins?.some( - (allowedOrigin) => allowedOrigin === origin - ) ?? true; + const isOriginAllowed = this.options.allowedOrigins?.some((allowedOrigin) => allowedOrigin === origin) ?? 
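For reference, a PATCH request that passes the header validation above looks like the following client-side sketch (URL and offset are invented):

```typescript
declare const nextChunk: Blob; // the next slice of the file being resumed

async function patchChunk() {
  const res = await fetch('http://localhost:3000/upload/abc123', {
    method: 'PATCH',
    headers: {
      'Tus-Resumable': '1.0.0', // required on every request except OPTIONS
      'Upload-Offset': '1048576', // must equal the server's current offset
      'Content-Type': 'application/offset+octet-stream', // only validated for PATCH
    },
    body: nextChunk,
  });
  console.log(res.headers.get('Upload-Offset')); // new offset after the write
}
```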
true; // 如果`origin`存在且在允许的源列表中,则返回该`origin` if (origin && isOriginAllowed) { return origin; } // 如果允许的源列表不为空,则返回列表中的第一个源地址 - if ( - this.options.allowedOrigins && - this.options.allowedOrigins.length > 0 - ) { - return this.options.allowedOrigins[0]; + if (this.options.allowedOrigins && this.options.allowedOrigins.length > 0) { + return this.options.allowedOrigins[0]!; } // 如果允许的源列表为空,则返回通配符`*`,表示允许所有源地址 - return "*"; + return '*'; } /** @@ -404,14 +329,13 @@ export class Server extends EventEmitter { req: http.IncomingMessage, res: http.ServerResponse, status: number, - body = "", - headers = {} + body = '', + headers: Record = {}, ) { const isAborted = context.signal.aborted; if (status !== 204) { - // @ts-expect-error not explicitly typed but possible - headers["Content-Length"] = Buffer.byteLength(body, "utf8"); + (headers as any)['Content-Length'] = Buffer.byteLength(body, 'utf8'); } if (isAborted) { @@ -420,14 +344,13 @@ export class Server extends EventEmitter { // 这是通过在响应中设置 'Connection' 头部为 'close' 来传达的。 // 这一步对于防止服务器继续处理不再需要的请求至关重要,从而节省资源。 - // @ts-expect-error not explicitly typed but possible - headers.Connection = "close"; + (headers as any).Connection = 'close'; // 为响应 ('res') 添加 'finish' 事件的事件监听器。 // 'finish' 事件在响应已发送给客户端时触发。 // 一旦响应完成,请求 ('req') 对象将被销毁。 // 销毁请求对象是释放与此请求相关的任何资源的关键步骤,因为它已经被中止。 - res.on("finish", () => { + res.on('finish', () => { req.destroy(); }); } @@ -453,7 +376,7 @@ export class Server extends EventEmitter { * @throws 如果数据存储不支持过期扩展,将抛出错误 */ cleanUpExpiredUploads(): Promise { - if (!this.datastore.hasExtension("expiration")) { + if (!this.datastore.hasExtension('expiration')) { throw ERRORS.UNSUPPORTED_EXPIRATION_EXTENSION; } @@ -475,25 +398,16 @@ export class Server extends EventEmitter { // 当 `abortWithDelayController` 被触发时调用此函数,以在指定延迟后中止请求。 const onDelayedAbort = (err: unknown) => { - abortWithDelayController.signal.removeEventListener( - "abort", - onDelayedAbort - ); + abortWithDelayController.signal.removeEventListener('abort', onDelayedAbort); setTimeout(() => { requestAbortController.abort(err); }, this.options.lockDrainTimeout); }; - abortWithDelayController.signal.addEventListener( - "abort", - onDelayedAbort - ); + abortWithDelayController.signal.addEventListener('abort', onDelayedAbort); // 当请求关闭时,移除监听器以避免内存泄漏。 - req.on("close", () => { - abortWithDelayController.signal.removeEventListener( - "abort", - onDelayedAbort - ); + req.on('close', () => { + abortWithDelayController.signal.removeEventListener('abort', onDelayedAbort); }); // 返回一个对象,包含信号和两个中止请求的方法。 diff --git a/packages/tus/src/store/s3-store/index.ts b/packages/tus/src/store/s3-store/index.ts index 2b58506..15cbef3 100644 --- a/packages/tus/src/store/s3-store/index.ts +++ b/packages/tus/src/store/s3-store/index.ts @@ -1,51 +1,52 @@ -import os from 'node:os' -import fs, { promises as fsProm } from 'node:fs' -import stream, { promises as streamProm } from 'node:stream' -import type { Readable } from 'node:stream' +import os from 'node:os'; +import fs, { promises as fsProm } from 'node:fs'; +import stream, { promises as streamProm } from 'node:stream'; +import type { Readable } from 'node:stream'; -import type AWS from '@aws-sdk/client-s3' -import { NoSuchKey, NotFound, S3, type S3ClientConfig } from '@aws-sdk/client-s3' -import debug from 'debug' +import type AWS from '@aws-sdk/client-s3'; +import { NoSuchKey, NotFound, S3, type S3ClientConfig } from '@aws-sdk/client-s3'; +import debug from 'debug'; import { - DataStore, - StreamSplitter, - Upload, - ERRORS, - TUS_RESUMABLE, - type 
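The origin resolution above collapses into three cases; a sketch of how `allowedOrigins` drives the response header (`Server` and `store` as in the earlier sketch, domains are examples):

```typescript
const server = new Server({
  path: '/upload',
  datastore: store,
  allowedOrigins: ['https://app.example.com', 'https://admin.example.com'],
  allowedCredentials: true,
});

// Origin: https://app.example.com  -> echoed back verbatim
// Origin: https://evil.example.com -> falls back to allowedOrigins[0]
// allowedOrigins omitted           -> Access-Control-Allow-Origin: *
```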
KvStore, - MemoryKvStore, -} from '../../utils' + DataStore, + StreamSplitter, + Upload, + ERRORS, + TUS_RESUMABLE, + type KvStore, + MemoryKvStore, + type ChunkInfo, +} from '../../utils'; -import { Semaphore, type Permit } from '@shopify/semaphore' -import MultiStream from 'multistream' -import crypto from 'node:crypto' -import path from 'node:path' +import { Semaphore, type Permit } from '@shopify/semaphore'; +import MultiStream from 'multistream'; +import crypto from 'node:crypto'; +import path from 'node:path'; -const log = debug('tus-node-server:stores:s3store') +const log = debug('tus-node-server:stores:s3store'); type Options = { - // The preferred part size for parts send to S3. Can not be lower than 5MiB or more than 5GiB. - // The server calculates the optimal part size, which takes this size into account, - // but may increase it to not exceed the S3 10K parts limit. - partSize?: number - useTags?: boolean - maxConcurrentPartUploads?: number - cache?: KvStore<MetadataValue> - expirationPeriodInMilliseconds?: number - // Options to pass to the AWS S3 SDK. - s3ClientConfig: S3ClientConfig & { bucket: string } -} + // The preferred part size for parts sent to S3. Cannot be lower than 5MiB or more than 5GiB. + // The server calculates the optimal part size, which takes this size into account, + // but may increase it to not exceed the S3 10K parts limit. + partSize?: number; + useTags?: boolean; + maxConcurrentPartUploads?: number; + cache?: KvStore<MetadataValue>; + expirationPeriodInMilliseconds?: number; + // Options to pass to the AWS S3 SDK. + s3ClientConfig: S3ClientConfig & { bucket: string }; +}; export type MetadataValue = { - file: Upload - 'upload-id': string - 'tus-version': string -} + file: Upload; + 'upload-id': string; + 'tus-version': string; +}; function calcOffsetFromParts(parts?: Array<AWS.Part>) { - // @ts-expect-error not undefined - return parts && parts.length > 0 ? parts.reduce((a, b) => a + b.Size, 0) : 0 + // @ts-expect-error not undefined + return parts && parts.length > 0 ? parts.reduce((a, b) => a + b.Size, 0) : 0; } // Implementation (based on https://github.com/tus/tusd/blob/master/s3store/s3store.go) @@ -82,722 +83,793 @@ function calcOffsetFromParts(parts?: Array<AWS.Part>) { // For each incoming PATCH request (a call to `write`), a new part is uploaded // to S3. export class S3Store extends DataStore { - private bucket: string - private cache: KvStore<MetadataValue> - private client: S3 - private preferredPartSize: number - private expirationPeriodInMilliseconds = 0 - private useTags = true - private partUploadSemaphore: Semaphore - public maxMultipartParts = 10_000 as const - public minPartSize = 5_242_880 as const // 5MiB - public maxUploadSize = 5_497_558_138_880 as const // 5TiB - - constructor(options: Options) { - super() - const { partSize, s3ClientConfig } = options - const { bucket, ...restS3ClientConfig } = s3ClientConfig - this.extensions = [ - 'creation', - 'creation-with-upload', - 'creation-defer-length', - 'termination', - 'expiration', - ] - this.bucket = bucket - this.preferredPartSize = partSize || 8 * 1024 * 1024 - this.expirationPeriodInMilliseconds = options.expirationPeriodInMilliseconds ?? 0 - this.useTags = options.useTags ?? true - this.cache = options.cache ?? new MemoryKvStore<MetadataValue>() - this.client = new S3(restS3ClientConfig) - this.partUploadSemaphore = new Semaphore(options.maxConcurrentPartUploads ??
60) - } - - protected shouldUseExpirationTags() { - return this.expirationPeriodInMilliseconds !== 0 && this.useTags - } - - protected useCompleteTag(value: 'true' | 'false') { - if (!this.shouldUseExpirationTags()) { - return undefined - } - - return `Tus-Completed=${value}` - } - - /** - * Saves upload metadata to a `${file_id}.info` file on S3. - * Please note that the file is empty and the metadata is saved - * on the S3 object's `Metadata` field, so that only a `headObject` - * is necessary to retrieve the data. - */ - private async saveMetadata(upload: Upload, uploadId: string) { - log(`[${upload.id}] saving metadata`) - await this.client.putObject({ - Bucket: this.bucket, - Key: this.infoKey(upload.id), - Body: JSON.stringify(upload), - Tagging: this.useCompleteTag('false'), - Metadata: { - 'upload-id': uploadId, - 'tus-version': TUS_RESUMABLE, - }, - }) - log(`[${upload.id}] metadata file saved`) - } - - private async completeMetadata(upload: Upload) { - if (!this.shouldUseExpirationTags()) { - return - } - - const { 'upload-id': uploadId } = await this.getMetadata(upload.id) - await this.client.putObject({ - Bucket: this.bucket, - Key: this.infoKey(upload.id), - Body: JSON.stringify(upload), - Tagging: this.useCompleteTag('true'), - Metadata: { - 'upload-id': uploadId, - 'tus-version': TUS_RESUMABLE, - }, - }) - } - - /** - * Retrieves upload metadata previously saved in `${file_id}.info`. - * There's a small and simple caching mechanism to avoid multiple - * HTTP calls to S3. - */ - private async getMetadata(id: string): Promise { - const cached = await this.cache.get(id) - if (cached) { - return cached - } - - const { Metadata, Body } = await this.client.getObject({ - Bucket: this.bucket, - Key: this.infoKey(id), - }) - const file = JSON.parse((await Body?.transformToString()) as string) - const metadata: MetadataValue = { - 'tus-version': Metadata?.['tus-version'] as string, - 'upload-id': Metadata?.['upload-id'] as string, - file: new Upload({ - id, - size: file.size ? Number.parseInt(file.size, 10) : undefined, - offset: Number.parseInt(file.offset, 10), - metadata: file.metadata, - creation_date: file.creation_date, - storage: file.storage, - }), - } - await this.cache.set(id, metadata) - return metadata - } - - private infoKey(id: string) { - return `${id}.info` - } - - private partKey(id: string, isIncomplete = false) { - if (isIncomplete) { - id += '.part' - } - - // TODO: introduce ObjectPrefixing for parts and incomplete parts. - // ObjectPrefix is prepended to the name of each S3 object that is created - // to store uploaded files. It can be used to create a pseudo-directory - // structure in the bucket, e.g. "path/to/my/uploads". 
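Putting the `Options` type above to work: a sketch of constructing the store against MinIO (the import path is assumed, and the credentials are the stock MinIO defaults, not values from this repo):

```typescript
import { S3Store } from '@repo/tus'; // assumed export path

const store = new S3Store({
  partSize: 8 * 1024 * 1024, // preferred part size; clamped by calcOptimalPartSize
  maxConcurrentPartUploads: 60, // size of the part-upload semaphore
  expirationPeriodInMilliseconds: 24 * 60 * 60 * 1000,
  s3ClientConfig: {
    bucket: 'uploads',
    region: 'us-east-1',
    endpoint: 'http://localhost:9000',
    forcePathStyle: true,
    credentials: {
      accessKeyId: 'minioadmin',
      secretAccessKey: 'minioadmin',
    },
  },
});
```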
- return id - } - - private async uploadPart( - metadata: MetadataValue, - readStream: fs.ReadStream | Readable, - partNumber: number - ): Promise { - const data = await this.client.uploadPart({ - Bucket: this.bucket, - Key: metadata.file.id, - UploadId: metadata['upload-id'], - PartNumber: partNumber, - Body: readStream, - }) - log(`[${metadata.file.id}] finished uploading part #${partNumber}`) - return data.ETag as string - } - - private async uploadIncompletePart( - id: string, - readStream: fs.ReadStream | Readable - ): Promise { - const data = await this.client.putObject({ - Bucket: this.bucket, - Key: this.partKey(id, true), - Body: readStream, - Tagging: this.useCompleteTag('false'), - }) - log(`[${id}] finished uploading incomplete part`) - return data.ETag as string - } - - private async downloadIncompletePart(id: string) { - const incompletePart = await this.getIncompletePart(id) - - if (!incompletePart) { - return - } - const filePath = await this.uniqueTmpFileName('tus-s3-incomplete-part-') - - try { - let incompletePartSize = 0 - - const byteCounterTransform = new stream.Transform({ - transform(chunk, _, callback) { - incompletePartSize += chunk.length - callback(null, chunk) - }, - }) - - // write to temporary file - await streamProm.pipeline( - incompletePart, - byteCounterTransform, - fs.createWriteStream(filePath) - ) - - const createReadStream = (options: { cleanUpOnEnd: boolean }) => { - const fileReader = fs.createReadStream(filePath) - - if (options.cleanUpOnEnd) { - fileReader.on('end', () => { - fs.unlink(filePath, () => { - // ignore - }) - }) - - fileReader.on('error', (err) => { - fileReader.destroy(err) - fs.unlink(filePath, () => { - // ignore - }) - }) - } - - return fileReader - } - - return { - size: incompletePartSize, - path: filePath, - createReader: createReadStream, - } - } catch (err) { - fsProm.rm(filePath).catch(() => { - /* ignore */ - }) - throw err - } - } - - private async getIncompletePart(id: string): Promise { - try { - const data = await this.client.getObject({ - Bucket: this.bucket, - Key: this.partKey(id, true), - }) - return data.Body as Readable - } catch (error) { - if (error instanceof NoSuchKey) { - return undefined - } - - throw error - } - } - - private async getIncompletePartSize(id: string): Promise { - try { - const data = await this.client.headObject({ - Bucket: this.bucket, - Key: this.partKey(id, true), - }) - return data.ContentLength - } catch (error) { - if (error instanceof NotFound) { - return undefined - } - throw error - } - } - - private async deleteIncompletePart(id: string): Promise { - await this.client.deleteObject({ - Bucket: this.bucket, - Key: this.partKey(id, true), - }) - } - - /** - * Uploads a stream to s3 using multiple parts - */ - private async uploadParts( - metadata: MetadataValue, - readStream: stream.Readable, - currentPartNumber: number, - offset: number - ): Promise { - const size = metadata.file.size - const promises: Promise[] = [] - let pendingChunkFilepath: string | null = null - let bytesUploaded = 0 - let permit: Permit | undefined = undefined - - const splitterStream = new StreamSplitter({ - chunkSize: this.calcOptimalPartSize(size), - directory: os.tmpdir(), - }) - .on('beforeChunkStarted', async () => { - permit = await this.partUploadSemaphore.acquire() - }) - .on('chunkStarted', (filepath) => { - pendingChunkFilepath = filepath - }) - .on('chunkFinished', ({ path, size: partSize }) => { - pendingChunkFilepath = null - - const acquiredPermit = permit - const partNumber = currentPartNumber++ 
- - offset += partSize - - const isFinalPart = size === offset - - // biome-ignore lint/suspicious/noAsyncPromiseExecutor: it's fine - const deferred = new Promise(async (resolve, reject) => { - try { - // Only the first chunk of each PATCH request can prepend - // an incomplete part (last chunk) from the previous request. - const readable = fs.createReadStream(path) - readable.on('error', reject) - - if (partSize >= this.minPartSize || isFinalPart) { - await this.uploadPart(metadata, readable, partNumber) - } else { - await this.uploadIncompletePart(metadata.file.id, readable) - } - - bytesUploaded += partSize - resolve() - } catch (error) { - reject(error) - } finally { - fsProm.rm(path).catch(() => { - /* ignore */ - }) - acquiredPermit?.release() - } - }) - - promises.push(deferred) - }) - .on('chunkError', () => { - permit?.release() - }) - - try { - await streamProm.pipeline(readStream, splitterStream) - } catch (error) { - if (pendingChunkFilepath !== null) { - try { - await fsProm.rm(pendingChunkFilepath) - } catch { - log(`[${metadata.file.id}] failed to remove chunk ${pendingChunkFilepath}`) - } - } - - promises.push(Promise.reject(error)) - } finally { - await Promise.all(promises) - } - - return bytesUploaded - } - - /** - * Completes a multipart upload on S3. - * This is where S3 concatenates all the uploaded parts. - */ - private async finishMultipartUpload(metadata: MetadataValue, parts: Array) { - const response = await this.client.completeMultipartUpload({ - Bucket: this.bucket, - Key: metadata.file.id, - UploadId: metadata['upload-id'], - MultipartUpload: { - Parts: parts.map((part) => { - return { - ETag: part.ETag, - PartNumber: part.PartNumber, - } - }), - }, - }) - return response.Location - } - - /** - * Gets the number of complete parts/chunks already uploaded to S3. - * Retrieves only consecutive parts. - */ - private async retrieveParts( - id: string, - partNumberMarker?: string - ): Promise> { - const metadata = await this.getMetadata(id) - - const params: AWS.ListPartsCommandInput = { - Bucket: this.bucket, - Key: id, - UploadId: metadata['upload-id'], - PartNumberMarker: partNumberMarker, - } - - const data = await this.client.listParts(params) - - let parts = data.Parts ?? [] - - if (data.IsTruncated) { - const rest = await this.retrieveParts(id, data.NextPartNumberMarker) - parts = [...parts, ...rest] - } - - if (!partNumberMarker) { - // biome-ignore lint/style/noNonNullAssertion: it's fine - parts.sort((a, b) => a.PartNumber! - b.PartNumber!) - } - - return parts - } - - /** - * Removes cached data for a given file. - */ - private async clearCache(id: string) { - log(`[${id}] removing cached data`) - await this.cache.delete(id) - } - - private calcOptimalPartSize(size?: number): number { - // When upload size is not know we assume largest possible value (`maxUploadSize`) - if (size === undefined) { - size = this.maxUploadSize - } - - let optimalPartSize: number - - // When upload is smaller or equal to PreferredPartSize, we upload in just one part. - if (size <= this.preferredPartSize) { - optimalPartSize = size - } - // Does the upload fit in MaxMultipartParts parts or less with PreferredPartSize. - else if (size <= this.preferredPartSize * this.maxMultipartParts) { - optimalPartSize = this.preferredPartSize - // The upload is too big for the preferred size. - // We devide the size with the max amount of parts and round it up. 
- } else { - optimalPartSize = Math.ceil(size / this.maxMultipartParts) - } - - return optimalPartSize - } - - /** - * Creates a multipart upload on S3 attaching any metadata to it. - * Also, a `${file_id}.info` file is created which holds some information - * about the upload itself like: `upload-id`, `upload-length`, etc. - */ - public async create(upload: Upload) { - log(`[${upload.id}] initializing multipart upload`) - const request: AWS.CreateMultipartUploadCommandInput = { - Bucket: this.bucket, - Key: upload.id, - Metadata: { 'tus-version': TUS_RESUMABLE }, - } - - if (upload.metadata?.contentType) { - request.ContentType = upload.metadata.contentType - } - - if (upload.metadata?.cacheControl) { - request.CacheControl = upload.metadata.cacheControl - } - - upload.creation_date = new Date().toISOString() - - const res = await this.client.createMultipartUpload(request) - upload.storage = { - type: 's3', - path: res.Key as string, - bucket: this.bucket, - } - await this.saveMetadata(upload, res.UploadId as string) - log(`[${upload.id}] multipart upload created (${res.UploadId})`) - - return upload - } - - async read(id: string) { - const data = await this.client.getObject({ - Bucket: this.bucket, - Key: id, - }) - return data.Body as Readable - } - - /** - * Write to the file, starting at the provided offset - */ - public async write(src: stream.Readable, id: string, offset: number): Promise { - // Metadata request needs to happen first - const metadata = await this.getMetadata(id) - const parts = await this.retrieveParts(id) - // biome-ignore lint/style/noNonNullAssertion: it's fine - const partNumber: number = parts.length > 0 ? parts[parts.length - 1].PartNumber! : 0 - const nextPartNumber = partNumber + 1 - - const incompletePart = await this.downloadIncompletePart(id) - const requestedOffset = offset - - if (incompletePart) { - // once the file is on disk, we delete the incomplete part - await this.deleteIncompletePart(id) - - offset = requestedOffset - incompletePart.size - src = new MultiStream([incompletePart.createReader({ cleanUpOnEnd: true }), src]) - } - - const bytesUploaded = await this.uploadParts(metadata, src, nextPartNumber, offset) - - // The size of the incomplete part should not be counted, because the - // process of the incomplete part should be fully transparent to the user. - const newOffset = requestedOffset + bytesUploaded - (incompletePart?.size ?? 0) - - if (metadata.file.size === newOffset) { - try { - const parts = await this.retrieveParts(id) - await this.finishMultipartUpload(metadata, parts) - await this.completeMetadata(metadata.file) - await this.clearCache(id) - } catch (error) { - log(`[${id}] failed to finish upload`, error) - throw error - } - } - - return newOffset - } - - public async getUpload(id: string): Promise { - let metadata: MetadataValue - try { - metadata = await this.getMetadata(id) - } catch (error) { - log('getUpload: No file found.', error) - throw ERRORS.FILE_NOT_FOUND - } - - let offset = 0 - - try { - const parts = await this.retrieveParts(id) - offset = calcOffsetFromParts(parts) - } catch (error: any) { - // Check if the error is caused by the upload not being found. This happens - // when the multipart upload has already been completed or aborted. Since - // we already found the info object, we know that the upload has been - // completed and therefore can ensure the the offset is the size. - // AWS S3 returns NoSuchUpload, but other implementations, such as DigitalOcean - // Spaces, can also return NoSuchKey. 
- if (error.Code === 'NoSuchUpload' || error.Code === 'NoSuchKey') { - return new Upload({ - ...metadata.file, - offset: metadata.file.size as number, - size: metadata.file.size, - metadata: metadata.file.metadata, - storage: metadata.file.storage, - }) - } - - log(error) - throw error - } - - const incompletePartSize = await this.getIncompletePartSize(id) - - return new Upload({ - ...metadata.file, - offset: offset + (incompletePartSize ?? 0), - size: metadata.file.size, - storage: metadata.file.storage, - }) - } - - public async declareUploadLength(file_id: string, upload_length: number) { - const { file, 'upload-id': uploadId } = await this.getMetadata(file_id) - if (!file) { - throw ERRORS.FILE_NOT_FOUND - } - - file.size = upload_length - - await this.saveMetadata(file, uploadId) - } - - public async remove(id: string): Promise { - try { - const { 'upload-id': uploadId } = await this.getMetadata(id) - if (uploadId) { - await this.client.abortMultipartUpload({ - Bucket: this.bucket, - Key: id, - UploadId: uploadId, - }) - } - } catch (error: any) { - if (error?.code && ['NotFound', 'NoSuchKey', 'NoSuchUpload'].includes(error.Code)) { - log('remove: No file found.', error) - throw ERRORS.FILE_NOT_FOUND - } - throw error - } - - await this.client.deleteObjects({ - Bucket: this.bucket, - Delete: { - Objects: [{ Key: id }, { Key: this.infoKey(id) }], - }, - }) - - this.clearCache(id) - } - - protected getExpirationDate(created_at: string) { - const date = new Date(created_at) - - return new Date(date.getTime() + this.getExpiration()) - } - - getExpiration(): number { - return this.expirationPeriodInMilliseconds - } - - async deleteExpired(): Promise { - if (this.getExpiration() === 0) { - return 0 - } - - let keyMarker: string | undefined = undefined - let uploadIdMarker: string | undefined = undefined - let isTruncated = true - let deleted = 0 - - while (isTruncated) { - const listResponse: AWS.ListMultipartUploadsCommandOutput = - await this.client.listMultipartUploads({ - Bucket: this.bucket, - KeyMarker: keyMarker, - UploadIdMarker: uploadIdMarker, - }) - - const expiredUploads = - listResponse.Uploads?.filter((multiPartUpload) => { - const initiatedDate = multiPartUpload.Initiated - return ( - initiatedDate && - new Date().getTime() > - this.getExpirationDate(initiatedDate.toISOString()).getTime() - ) - }) || [] - - const objectsToDelete = expiredUploads.reduce( - (all, expiredUpload) => { - all.push( - { - key: this.infoKey(expiredUpload.Key as string), - }, - { - key: this.partKey(expiredUpload.Key as string, true), - } - ) - return all - }, - [] as { key: string }[] - ) - - const deletions: Promise[] = [] - - // Batch delete 1000 items at a time - while (objectsToDelete.length > 0) { - const objects = objectsToDelete.splice(0, 1000) - deletions.push( - this.client.deleteObjects({ - Bucket: this.bucket, - Delete: { - Objects: objects.map((object) => ({ - Key: object.key, - })), - }, - }) - ) - } - - const [objectsDeleted] = await Promise.all([ - Promise.all(deletions), - ...expiredUploads.map((expiredUpload) => { - return this.client.abortMultipartUpload({ - Bucket: this.bucket, - Key: expiredUpload.Key, - UploadId: expiredUpload.UploadId, - }) - }), - ]) - - deleted += objectsDeleted.reduce((all, acc) => all + (acc.Deleted?.length ?? 
0), 0) - - isTruncated = Boolean(listResponse.IsTruncated) - - if (isTruncated) { - keyMarker = listResponse.NextKeyMarker - uploadIdMarker = listResponse.NextUploadIdMarker - } - } - - return deleted - } - - private async uniqueTmpFileName(template: string): Promise { - let tries = 0 - const maxTries = 10 - - while (tries < maxTries) { - const fileName = - template + crypto.randomBytes(10).toString('base64url').slice(0, 10) - const filePath = path.join(os.tmpdir(), fileName) - - try { - await fsProm.lstat(filePath) - // If no error, file exists, so try again - tries++ - } catch (e: any) { - if (e.code === 'ENOENT') { - // File does not exist, return the path - return filePath - } - throw e // For other errors, rethrow - } - } - - throw new Error(`Could not find a unique file name after ${maxTries} tries`) - } -} \ No newline at end of file + private bucket: string; + private cache: KvStore; + private client: S3; + private preferredPartSize: number; + private expirationPeriodInMilliseconds = 0; + private useTags = true; + private partUploadSemaphore: Semaphore; + public maxMultipartParts = 10_000 as const; + public minPartSize = 5_242_880 as const; // 5MiB + public maxUploadSize = 5_497_558_138_880 as const; // 5TiB + + constructor(options: Options) { + super(); + const { partSize, s3ClientConfig } = options; + const { bucket, ...restS3ClientConfig } = s3ClientConfig; + this.extensions = ['creation', 'creation-with-upload', 'creation-defer-length', 'termination', 'expiration']; + this.bucket = bucket; + this.preferredPartSize = partSize || 8 * 1024 * 1024; + this.expirationPeriodInMilliseconds = options.expirationPeriodInMilliseconds ?? 0; + this.useTags = options.useTags ?? true; + this.cache = options.cache ?? new MemoryKvStore(); + this.client = new S3(restS3ClientConfig); + this.partUploadSemaphore = new Semaphore(options.maxConcurrentPartUploads ?? 60); + } + + protected shouldUseExpirationTags() { + return this.expirationPeriodInMilliseconds !== 0 && this.useTags; + } + + protected useCompleteTag(value: 'true' | 'false') { + if (!this.shouldUseExpirationTags()) { + return undefined; + } + + return `Tus-Completed=${value}`; + } + + /** + * Saves upload metadata to a `${file_id}.info` file on S3. + * Please note that the file is empty and the metadata is saved + * on the S3 object's `Metadata` field, so that only a `headObject` + * is necessary to retrieve the data. 
+ */ + private async saveMetadata(upload: Upload, uploadId: string) { + log(`[${upload.id}] saving metadata`); + console.log(`[S3Store] Saving metadata for upload ${upload.id}, uploadId: ${uploadId}`); + try { + await this.client.putObject({ + Bucket: this.bucket, + Key: this.infoKey(upload.id), + Body: JSON.stringify(upload), + Tagging: this.useCompleteTag('false'), + Metadata: { + 'upload-id': uploadId, + 'tus-version': TUS_RESUMABLE, + }, + }); + log(`[${upload.id}] metadata file saved`); + console.log(`[S3Store] Metadata saved successfully for upload ${upload.id}`); + } catch (error) { + console.error(`[S3Store] Failed to save metadata for upload ${upload.id}:`, error); + throw error; + } + } + + private async completeMetadata(upload: Upload) { + if (!this.shouldUseExpirationTags()) { + return; + } + + const { 'upload-id': uploadId } = await this.getMetadata(upload.id); + await this.client.putObject({ + Bucket: this.bucket, + Key: this.infoKey(upload.id), + Body: JSON.stringify(upload), + Tagging: this.useCompleteTag('true'), + Metadata: { + 'upload-id': uploadId, + 'tus-version': TUS_RESUMABLE, + }, + }); + } + + /** + * Retrieves upload metadata previously saved in `${file_id}.info`. + * There's a small and simple caching mechanism to avoid multiple + * HTTP calls to S3. + */ + private async getMetadata(id: string): Promise { + const cached = await this.cache.get(id); + if (cached) { + return cached; + } + + const { Metadata, Body } = await this.client.getObject({ + Bucket: this.bucket, + Key: this.infoKey(id), + }); + const file = JSON.parse((await Body?.transformToString()) as string); + const metadata: MetadataValue = { + 'tus-version': Metadata?.['tus-version'] as string, + 'upload-id': Metadata?.['upload-id'] as string, + file: new Upload({ + id, + size: file.size ? Number.parseInt(file.size, 10) : undefined, + offset: Number.parseInt(file.offset, 10), + metadata: file.metadata, + creation_date: file.creation_date, + storage: file.storage, + }), + }; + await this.cache.set(id, metadata); + return metadata; + } + + private infoKey(id: string) { + return `${id}.info`; + } + + private partKey(id: string, isIncomplete = false) { + if (isIncomplete) { + id += '.part'; + } + + // TODO: introduce ObjectPrefixing for parts and incomplete parts. + // ObjectPrefix is prepended to the name of each S3 object that is created + // to store uploaded files. It can be used to create a pseudo-directory + // structure in the bucket, e.g. "path/to/my/uploads". 
+ return id; + } + + private async uploadPart( + metadata: MetadataValue, + readStream: fs.ReadStream | Readable, + partNumber: number, + ): Promise { + console.log(`[S3Store] Starting upload part #${partNumber} for ${metadata.file.id}`); + try { + const data = await this.client.uploadPart({ + Bucket: this.bucket, + Key: metadata.file.id, + UploadId: metadata['upload-id'], + PartNumber: partNumber, + Body: readStream, + }); + log(`[${metadata.file.id}] finished uploading part #${partNumber}`); + console.log(`[S3Store] Successfully uploaded part #${partNumber} for ${metadata.file.id}, ETag: ${data.ETag}`); + return data.ETag as string; + } catch (error) { + console.error(`[S3Store] Failed to upload part #${partNumber} for ${metadata.file.id}:`, error); + throw error; + } + } + + private async uploadIncompletePart(id: string, readStream: fs.ReadStream | Readable): Promise { + console.log(`[S3Store] Starting upload incomplete part for ${id}`); + try { + const data = await this.client.putObject({ + Bucket: this.bucket, + Key: this.partKey(id, true), + Body: readStream, + Tagging: this.useCompleteTag('false'), + }); + log(`[${id}] finished uploading incomplete part`); + console.log(`[S3Store] Successfully uploaded incomplete part for ${id}, ETag: ${data.ETag}`); + return data.ETag as string; + } catch (error) { + console.error(`[S3Store] Failed to upload incomplete part for ${id}:`, error); + throw error; + } + } + + private async downloadIncompletePart(id: string) { + const incompletePart = await this.getIncompletePart(id); + + if (!incompletePart) { + return; + } + const filePath = await this.uniqueTmpFileName('tus-s3-incomplete-part-'); + + try { + let incompletePartSize = 0; + + const byteCounterTransform = new stream.Transform({ + transform(chunk, _, callback) { + incompletePartSize += chunk.length; + callback(null, chunk); + }, + }); + + // write to temporary file + await streamProm.pipeline(incompletePart, byteCounterTransform, fs.createWriteStream(filePath)); + + const createReadStream = (options: { cleanUpOnEnd: boolean }) => { + const fileReader = fs.createReadStream(filePath); + + if (options.cleanUpOnEnd) { + fileReader.on('end', () => { + fs.unlink(filePath, () => { + // ignore + }); + }); + + fileReader.on('error', (err) => { + fileReader.destroy(err); + fs.unlink(filePath, () => { + // ignore + }); + }); + } + + return fileReader; + }; + + return { + size: incompletePartSize, + path: filePath, + createReader: createReadStream, + }; + } catch (err) { + fsProm.rm(filePath).catch(() => { + /* ignore */ + }); + throw err; + } + } + + private async getIncompletePart(id: string): Promise { + try { + const data = await this.client.getObject({ + Bucket: this.bucket, + Key: this.partKey(id, true), + }); + return data.Body as Readable; + } catch (error) { + if (error instanceof NoSuchKey) { + return undefined; + } + + throw error; + } + } + + private async getIncompletePartSize(id: string): Promise { + try { + const data = await this.client.headObject({ + Bucket: this.bucket, + Key: this.partKey(id, true), + }); + return data.ContentLength; + } catch (error) { + if (error instanceof NotFound) { + return undefined; + } + throw error; + } + } + + private async deleteIncompletePart(id: string): Promise { + await this.client.deleteObject({ + Bucket: this.bucket, + Key: this.partKey(id, true), + }); + } + + /** + * Uploads a stream to s3 using multiple parts + */ + private async uploadParts( + metadata: MetadataValue, + readStream: stream.Readable, + currentPartNumber: number, + offset: number, 
+  /**
+   * Uploads a stream to s3 using multiple parts
+   */
+  private async uploadParts(
+    metadata: MetadataValue,
+    readStream: stream.Readable,
+    currentPartNumber: number,
+    offset: number,
+  ): Promise<number> {
+    console.log(
+      `[S3Store] uploadParts starting for ${metadata.file.id}, currentPartNumber: ${currentPartNumber}, offset: ${offset}`,
+    );
+
+    const size = metadata.file.size;
+    const promises: Promise<void>[] = [];
+    let pendingChunkFilepath: string | null = null;
+    let bytesUploaded = 0;
+    let permit: Permit | undefined = undefined;
+
+    const optimalPartSize = this.calcOptimalPartSize(size);
+    console.log(`[S3Store] Using optimal part size: ${optimalPartSize} bytes for ${metadata.file.id}`);
+
+    const splitterStream = new StreamSplitter({
+      chunkSize: optimalPartSize,
+      directory: os.tmpdir(),
+    })
+      .on('beforeChunkStarted', async () => {
+        console.log(`[S3Store] Acquiring semaphore permit for ${metadata.file.id}`);
+        permit = await this.partUploadSemaphore.acquire();
+      })
+      .on('chunkStarted', (filepath) => {
+        console.log(`[S3Store] Chunk started for ${metadata.file.id}, file: ${filepath}`);
+        pendingChunkFilepath = filepath;
+      })
+      .on('chunkFinished', (chunkInfo: ChunkInfo) => {
+        const { size: partSize, path } = chunkInfo;
+        console.log(`[S3Store] Chunk finished for ${metadata.file.id}, size: ${partSize}, path: ${path}`);
+        pendingChunkFilepath = null;
+
+        const acquiredPermit = permit;
+        const partNumber = currentPartNumber++;
+
+        offset += partSize;
+
+        const isFinalPart = size === offset;
+        console.log(
+          `[S3Store] Processing part #${partNumber} for ${metadata.file.id}, isFinalPart: ${isFinalPart}, partSize: ${partSize}`,
+        );
+
+        // biome-ignore lint/suspicious/noAsyncPromiseExecutor: it's fine
+        const deferred = new Promise<void>(async (resolve, reject) => {
+          try {
+            // Only the first chunk of each PATCH request can prepend
+            // an incomplete part (last chunk) from the previous request.
+            if (!path) {
+              reject(new Error(`Chunk path is null or undefined for ${metadata.file.id}, part #${partNumber}`));
+              return;
+            }
+            const readable = fs.createReadStream(path);
+            readable.on('error', reject);
+
+            if (partSize >= this.minPartSize || isFinalPart) {
+              console.log(`[S3Store] Uploading part #${partNumber} for ${metadata.file.id} (${partSize} bytes)`);
+              await this.uploadPart(metadata, readable, partNumber);
+            } else {
+              console.log(`[S3Store] Uploading incomplete part for ${metadata.file.id} (${partSize} bytes)`);
+              await this.uploadIncompletePart(metadata.file.id, readable);
+            }
+
+            bytesUploaded += partSize;
+            console.log(
+              `[S3Store] Part upload completed for ${metadata.file.id}, total bytes uploaded: ${bytesUploaded}`,
+            );
+            resolve();
+          } catch (error) {
+            console.error(`[S3Store] Part upload failed for ${metadata.file.id}, part #${partNumber}:`, error);
+            reject(error);
+          } finally {
+            if (path) {
+              fsProm.rm(path).catch(() => {
+                /* ignore */
+              });
+            }
+            acquiredPermit?.release();
+          }
+        });
+
+        promises.push(deferred);
+      })
+      .on('chunkError', (error) => {
+        console.error(`[S3Store] Chunk error for ${metadata.file.id}:`, error);
+        permit?.release();
+      });
+
+    try {
+      console.log(`[S3Store] Starting stream pipeline for ${metadata.file.id}`);
+      await streamProm.pipeline(readStream, splitterStream);
+      console.log(`[S3Store] Stream pipeline completed for ${metadata.file.id}`);
+    } catch (error) {
+      console.error(`[S3Store] Stream pipeline failed for ${metadata.file.id}:`, error);
+      if (pendingChunkFilepath !== null) {
+        try {
+          await fsProm.rm(pendingChunkFilepath);
+        } catch {
+          log(`[${metadata.file.id}] failed to remove chunk ${pendingChunkFilepath}`);
+        }
+      }
+
+      promises.push(Promise.reject(error));
+    } finally {
+      console.log(`[S3Store] Waiting for all part uploads to complete for ${metadata.file.id}`);
+      await Promise.all(promises);
+      console.log(`[S3Store] All part uploads completed for ${metadata.file.id}`);
+    }
+
+    console.log(`[S3Store] uploadParts completed for ${metadata.file.id}, total bytes uploaded: ${bytesUploaded}`);
+    return bytesUploaded;
+  }
+
+  /**
+   * Completes a multipart upload on S3.
+   * This is where S3 concatenates all the uploaded parts.
+   */
+  private async finishMultipartUpload(metadata: MetadataValue, parts: Array<AWS.Part>) {
+    const response = await this.client.completeMultipartUpload({
+      Bucket: this.bucket,
+      Key: metadata.file.id,
+      UploadId: metadata['upload-id'],
+      MultipartUpload: {
+        Parts: parts.map((part) => {
+          return {
+            ETag: part.ETag,
+            PartNumber: part.PartNumber,
+          };
+        }),
+      },
+    });
+    return response.Location;
+  }
+
+  /**
+   * Gets the number of complete parts/chunks already uploaded to S3.
+   * Retrieves only consecutive parts.
+   */
+  private async retrieveParts(id: string, partNumberMarker?: string): Promise<Array<AWS.Part>> {
+    const metadata = await this.getMetadata(id);
+
+    const params: AWS.ListPartsCommandInput = {
+      Bucket: this.bucket,
+      Key: id,
+      UploadId: metadata['upload-id'],
+      PartNumberMarker: partNumberMarker,
+    };
+
+    const data = await this.client.listParts(params);
+
+    let parts = data.Parts ?? [];
+
+    if (data.IsTruncated) {
+      const rest = await this.retrieveParts(id, data.NextPartNumberMarker);
+      parts = [...parts, ...rest];
+    }
+
+    if (!partNumberMarker) {
+      // biome-ignore lint/style/noNonNullAssertion: it's fine
+      parts.sort((a, b) => a.PartNumber! - b.PartNumber!);
+    }
+
+    return parts;
+  }
+
+  /**
+   * Removes cached data for a given file.
+   */
+  private async clearCache(id: string) {
+    log(`[${id}] removing cached data`);
+    await this.cache.delete(id);
+  }
+
+  private calcOptimalPartSize(size?: number): number {
+    // When the upload size is not known we assume the largest possible value (`maxUploadSize`)
+    if (size === undefined) {
+      size = this.maxUploadSize;
+    }
+
+    let optimalPartSize: number;
+
+    // When the upload is smaller than or equal to preferredPartSize, we upload it in just one part.
+    if (size <= this.preferredPartSize) {
+      optimalPartSize = size;
+    }
+    // Does the upload fit in maxMultipartParts parts or fewer with preferredPartSize?
+    else if (size <= this.preferredPartSize * this.maxMultipartParts) {
+      optimalPartSize = this.preferredPartSize;
+      // The upload is too big for the preferred size.
+      // We divide the size by the maximum number of parts and round up.
+    } else {
+      optimalPartSize = Math.ceil(size / this.maxMultipartParts);
+    }
+
+    return optimalPartSize;
+  }
+
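+  // Worked example for calcOptimalPartSize() above, assuming the common
+  // defaults of an 8 MiB preferredPartSize and S3's 10,000-part ceiling
+  // (both values are set in this store's constructor, outside this diff):
+  //   size 1 MiB   -> optimalPartSize = 1 MiB  (single part, size <= preferred)
+  //   size 1 GiB   -> optimalPartSize = 8 MiB  (128 parts, fits the ceiling)
+  //   size 100 GiB -> optimalPartSize = ceil(size / 10000) ~= 10.24 MiB
+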
+  /**
+   * Creates a multipart upload on S3 attaching any metadata to it.
+   * Also, a `${file_id}.info` file is created which holds some information
+   * about the upload itself, like: `upload-id`, `upload-length`, etc.
+   */
+  public async create(upload: Upload) {
+    log(`[${upload.id}] initializing multipart upload`);
+    console.log(`[S3Store] Creating multipart upload for ${upload.id}, bucket: ${this.bucket}`);
+
+    const request: AWS.CreateMultipartUploadCommandInput = {
+      Bucket: this.bucket,
+      Key: upload.id,
+      Metadata: { 'tus-version': TUS_RESUMABLE },
+    };
+
+    if (upload.metadata?.contentType) {
+      request.ContentType = upload.metadata.contentType;
+      console.log(`[S3Store] Setting ContentType: ${upload.metadata.contentType}`);
+    }
+
+    if (upload.metadata?.cacheControl) {
+      request.CacheControl = upload.metadata.cacheControl;
+    }
+
+    upload.creation_date = new Date().toISOString();
+
+    try {
+      console.log(`[S3Store] Sending createMultipartUpload request for ${upload.id}`);
+      const res = await this.client.createMultipartUpload(request);
+      console.log(`[S3Store] Multipart upload created successfully, UploadId: ${res.UploadId}`);
+
+      upload.storage = {
+        type: 's3',
+        path: res.Key as string,
+        bucket: this.bucket,
+      };
+
+      await this.saveMetadata(upload, res.UploadId as string);
+      log(`[${upload.id}] multipart upload created (${res.UploadId})`);
+      console.log(`[S3Store] Upload creation completed for ${upload.id}`);
+
+      return upload;
+    } catch (error) {
+      console.error(`[S3Store] Failed to create multipart upload for ${upload.id}:`, error);
+      throw error;
+    }
+  }
+
+  async read(id: string) {
+    const data = await this.client.getObject({
+      Bucket: this.bucket,
+      Key: id,
+    });
+    return data.Body as Readable;
+  }
+
+  /**
+   * Write to the file, starting at the provided offset
+   */
+  public async write(src: stream.Readable, id: string, offset: number): Promise<number> {
+    console.log(`[S3Store] Starting write operation for ${id}, offset: ${offset}`);
+
+    try {
+      // Metadata request needs to happen first
+      console.log(`[S3Store] Retrieving metadata for ${id}`);
+      const metadata = await this.getMetadata(id);
+      console.log(`[S3Store] Retrieved metadata for ${id}, file size: ${metadata.file.size}`);
+
+      const parts = await this.retrieveParts(id);
+      console.log(`[S3Store] Retrieved ${parts.length} existing parts for ${id}`);
+
+      // biome-ignore lint/style/noNonNullAssertion: it's fine
+      const partNumber: number = parts.length > 0 ? (parts[parts.length - 1]?.PartNumber ?? 0) : 0;
+      const nextPartNumber = partNumber + 1;
+      console.log(`[S3Store] Next part number will be: ${nextPartNumber}`);
+
+      const incompletePart = await this.downloadIncompletePart(id);
+      const requestedOffset = offset;
+
+      if (incompletePart) {
+        console.log(`[S3Store] Found incomplete part for ${id}, size: ${incompletePart.size}`);
+        // once the file is on disk, we delete the incomplete part
+        await this.deleteIncompletePart(id);
+
+        offset = requestedOffset - incompletePart.size;
+        src = new MultiStream([incompletePart.createReader({ cleanUpOnEnd: true }), src]);
+      }
+
+      console.log(`[S3Store] Starting uploadParts for ${id}`);
+      const bytesUploaded = await this.uploadParts(metadata, src, nextPartNumber, offset);
+      console.log(`[S3Store] uploadParts completed for ${id}, bytes uploaded: ${bytesUploaded}`);
+
+      // The size of the incomplete part should not be counted, because the
+      // processing of the incomplete part should be fully transparent to the user.
+      const newOffset = requestedOffset + bytesUploaded - (incompletePart?.size ?? 0);
+      console.log(`[S3Store] New offset for ${id}: ${newOffset}, file size: ${metadata.file.size}`);
+
+      if (metadata.file.size === newOffset) {
+        console.log(`[S3Store] Upload completed for ${id}, finishing multipart upload`);
+        try {
+          const parts = await this.retrieveParts(id);
+          console.log(`[S3Store] Retrieved ${parts.length} parts for completion`);
+
+          await this.finishMultipartUpload(metadata, parts);
+          console.log(`[S3Store] Multipart upload finished successfully for ${id}`);
+
+          await this.completeMetadata(metadata.file);
+          console.log(`[S3Store] Metadata completed for ${id}`);
+
+          await this.clearCache(id);
+          console.log(`[S3Store] Cache cleared for ${id}`);
+        } catch (error) {
+          log(`[${id}] failed to finish upload`, error);
+          console.error(`[S3Store] Failed to finish upload for ${id}:`, error);
+          throw error;
+        }
+      }
+
+      return newOffset;
+    } catch (error) {
+      console.error(`[S3Store] Write operation failed for ${id}:`, error);
+      throw error;
+    }
+  }
+
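+  // Worked example for the offset bookkeeping in write() above (numbers are
+  // illustrative): a previous PATCH left a 3 MiB incomplete part and the
+  // client resumes at offset 3 MiB with 10 MiB of new data. The staged 3 MiB
+  // is re-read from S3 and replayed through uploadParts(), which therefore
+  // reports bytesUploaded = 13 MiB, and
+  //   newOffset = 3 MiB (requested) + 13 MiB (uploaded) - 3 MiB (incomplete) = 13 MiB,
+  // exactly the offset the client should see next.
+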
+  public async getUpload(id: string): Promise<Upload> {
+    let metadata: MetadataValue;
+    try {
+      metadata = await this.getMetadata(id);
+    } catch (error) {
+      log('getUpload: No file found.', error);
+      throw ERRORS.FILE_NOT_FOUND;
+    }
+
+    let offset = 0;
+
+    try {
+      const parts = await this.retrieveParts(id);
+      offset = calcOffsetFromParts(parts);
+    } catch (error: any) {
+      // Check if the error is caused by the upload not being found. This happens
+      // when the multipart upload has already been completed or aborted. Since
+      // we already found the info object, we know that the upload has been
+      // completed and can therefore ensure that the offset equals the size.
+      // AWS S3 returns NoSuchUpload, but other implementations, such as DigitalOcean
+      // Spaces, can also return NoSuchKey.
+      if (error.Code === 'NoSuchUpload' || error.Code === 'NoSuchKey') {
+        return new Upload({
+          ...metadata.file,
+          offset: metadata.file.size as number,
+          size: metadata.file.size,
+          metadata: metadata.file.metadata,
+          storage: metadata.file.storage,
+        });
+      }
+
+      log(error);
+      throw error;
+    }
+
+    const incompletePartSize = await this.getIncompletePartSize(id);
+
+    return new Upload({
+      ...metadata.file,
+      offset: offset + (incompletePartSize ?? 0),
+      size: metadata.file.size,
+      storage: metadata.file.storage,
+    });
+  }
+
+  public async declareUploadLength(file_id: string, upload_length: number) {
+    const { file, 'upload-id': uploadId } = await this.getMetadata(file_id);
+    if (!file) {
+      throw ERRORS.FILE_NOT_FOUND;
+    }
+
+    file.size = upload_length;
+
+    await this.saveMetadata(file, uploadId);
+  }
+
+  public async remove(id: string): Promise<void> {
+    try {
+      const { 'upload-id': uploadId } = await this.getMetadata(id);
+      if (uploadId) {
+        await this.client.abortMultipartUpload({
+          Bucket: this.bucket,
+          Key: id,
+          UploadId: uploadId,
+        });
+      }
+    } catch (error: any) {
+      if (error?.Code && ['NotFound', 'NoSuchKey', 'NoSuchUpload'].includes(error.Code)) {
+        log('remove: No file found.', error);
+        throw ERRORS.FILE_NOT_FOUND;
+      }
+      throw error;
+    }
+
+    await this.client.deleteObjects({
+      Bucket: this.bucket,
+      Delete: {
+        Objects: [{ Key: id }, { Key: this.infoKey(id) }],
+      },
+    });
+
+    this.clearCache(id);
+  }
+
+  protected getExpirationDate(created_at: string) {
+    const date = new Date(created_at);
+
+    return new Date(date.getTime() + this.getExpiration());
+  }
+
+  getExpiration(): number {
+    return this.expirationPeriodInMilliseconds;
+  }
+
+  async deleteExpired(): Promise<number> {
+    if (this.getExpiration() === 0) {
+      return 0;
+    }
+
+    let keyMarker: string | undefined = undefined;
+    let uploadIdMarker: string | undefined = undefined;
+    let isTruncated = true;
+    let deleted = 0;
+
+    while (isTruncated) {
+      const listResponse: AWS.ListMultipartUploadsCommandOutput = await this.client.listMultipartUploads({
+        Bucket: this.bucket,
+        KeyMarker: keyMarker,
+        UploadIdMarker: uploadIdMarker,
+      });
+
+      const expiredUploads =
+        listResponse.Uploads?.filter((multiPartUpload) => {
+          const initiatedDate = multiPartUpload.Initiated;
+          return initiatedDate && new Date().getTime() > this.getExpirationDate(initiatedDate.toISOString()).getTime();
+        }) || [];
+
+      const objectsToDelete = expiredUploads.reduce(
+        (all, expiredUpload) => {
+          all.push(
+            {
+              key: this.infoKey(expiredUpload.Key as string),
+            },
+            {
+              key: this.partKey(expiredUpload.Key as string, true),
+            },
+          );
+          return all;
+        },
+        [] as { key: string }[],
+      );
+
+      const deletions: Promise<AWS.DeleteObjectsCommandOutput>[] = [];
+
+      // Batch delete 1000 items at a time
+      while (objectsToDelete.length > 0) {
+        const objects = objectsToDelete.splice(0, 1000);
+        deletions.push(
+          this.client.deleteObjects({
+            Bucket: this.bucket,
+            Delete: {
+              Objects: objects.map((object) => ({
+                Key: object.key,
+              })),
+            },
+          }),
+        );
+      }
+
+      const [objectsDeleted] = await Promise.all([
+        Promise.all(deletions),
+        ...expiredUploads.map((expiredUpload) => {
+          return this.client.abortMultipartUpload({
+            Bucket: this.bucket,
+            Key: expiredUpload.Key,
+            UploadId: expiredUpload.UploadId,
+          });
+        }),
+      ]);
+
+      deleted += objectsDeleted.reduce((all, acc) => all + (acc.Deleted?.length ?? 0), 0);
+
+      isTruncated = Boolean(listResponse.IsTruncated);
+
+      if (isTruncated) {
+        keyMarker = listResponse.NextKeyMarker;
+        uploadIdMarker = listResponse.NextUploadIdMarker;
+      }
+    }
+
+    return deleted;
+  }
+
+  private async uniqueTmpFileName(template: string): Promise<string> {
+    let tries = 0;
+    const maxTries = 10;
+
+    while (tries < maxTries) {
+      const fileName = template + crypto.randomBytes(10).toString('base64url').slice(0, 10);
+      const filePath = path.join(os.tmpdir(), fileName);
+
+      try {
+        await fsProm.lstat(filePath);
+        // If no error, file exists, so try again
+        tries++;
+      } catch (e: any) {
+        if (e.code === 'ENOENT') {
+          // File does not exist, return the path
+          return filePath;
+        }
+        throw e; // For other errors, rethrow
+      }
+    }
+
+    throw new Error(`Could not find a unique file name after ${maxTries} tries`);
+  }
+}
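
That closes out the S3 store. One operational note worth making explicit: `deleteExpired` is a no-op unless an expiration period is configured, and nothing in the store schedules it, so the host application has to run the sweep itself. A minimal sketch, assuming an already-constructed store instance and an hourly interval (neither is specified in this diff):

```typescript
import type { S3Store } from './S3Store'; // import path assumed

declare const store: S3Store; // assume an instance with a non-zero expiration period

const SWEEP_INTERVAL_MS = 60 * 60 * 1000; // hourly; tune to the expiration period

if (store.getExpiration() > 0) {
  setInterval(async () => {
    try {
      // Aborts expired multipart uploads and deletes their .info/.part objects.
      const removed = await store.deleteExpired();
      if (removed > 0) console.log(`[cleanup] removed ${removed} expired upload objects`);
    } catch (err) {
      console.error('[cleanup] deleteExpired failed:', err);
    }
  }, SWEEP_INTERVAL_MS);
}
```
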
diff --git a/packages/tus/src/utils/kvstores/FileKvStore.ts b/packages/tus/src/utils/kvstores/FileKvStore.ts
index 752a776..6ee0c44 100644
--- a/packages/tus/src/utils/kvstores/FileKvStore.ts
+++ b/packages/tus/src/utils/kvstores/FileKvStore.ts
@@ -1,94 +1,92 @@
-import fs from 'node:fs/promises'
-import path from 'node:path'
+import fs from 'node:fs/promises';
+import path from 'node:path';
 
-import type {KvStore} from './Types'
-import type {Upload} from '../models'
+import type { KvStore } from './Types';
+import type { Upload } from '../models';
 
 /**
  * 文件键值存储(FileKvStore)
- * 
+ *
+ * @description 基于文件系统的键值对存储实现,专门用于存储上传文件的元数据
- * @remarks 
+ * @remarks
  * - 将上传文件的JSON元数据存储在磁盘上,与上传文件同目录
  * - 使用队列机制确保并发安全,每次仅处理一个操作
- * 
+ *
  * @typeparam T 存储的数据类型,默认为Upload类型
  */
 export class FileKvStore<T = Upload> implements KvStore<T> {
-  /** 存储目录路径 */
-  directory: string
-  /**
-   * 构造函数
-   *
-   * @param path 指定存储元数据的目录路径
-   */
-  constructor(path: string) {
-    this.directory = path
-  }
-  /**
-   * 根据键获取存储的值
-   *
-   * @param key 键名
-   * @returns 返回对应的值,如果不存在则返回undefined
-   */
-  async get(key: string): Promise<T | undefined> {
-    try {
-      // 读取对应键的JSON文件
-      const buffer = await fs.readFile(this.resolve(key), 'utf8')
-      // 解析JSON并返回
-      return JSON.parse(buffer as string)
-    } catch {
-      // 文件不存在或读取失败时返回undefined
-      return undefined
-    }
-  }
-  /**
-   * 存储键值对
-   * @param key 键名
-   * @param value 要存储的值
-   */
-  async set(key: string, value: T): Promise<void> {
-    // 将值转换为JSON并写入文件
-    await fs.writeFile(this.resolve(key), JSON.stringify(value))
-  }
-  /**
-   * 删除指定键的值
-   *
-   * @param key 要删除的键名
-   */
-  async delete(key: string): Promise<void> {
-    // 删除对应的JSON文件
-    await fs.rm(this.resolve(key))
-  }
+  /** 存储目录路径 */
+  directory: string;
+  /**
+   * 构造函数
+   *
+   * @param path 指定存储元数据的目录路径
+   */
+  constructor(path: string) {
+    this.directory = path;
+  }
+  /**
+   * 根据键获取存储的值
+   *
+   * @param key 键名
+   * @returns 返回对应的值,如果不存在则返回undefined
+   */
+  async get(key: string): Promise<T | undefined> {
+    try {
+      // 读取对应键的JSON文件
+      const buffer = await fs.readFile(this.resolve(key), 'utf8');
+      // 解析JSON并返回
+      return JSON.parse(buffer as string);
+    } catch {
+      // 文件不存在或读取失败时返回undefined
+      return undefined;
+    }
+  }
+  /**
+   * 存储键值对
+   * @param key 键名
+   * @param value 要存储的值
+   */
+  async set(key: string, value: T): Promise<void> {
+    // 将值转换为JSON并写入文件
+    await fs.writeFile(this.resolve(key), JSON.stringify(value));
+  }
+  /**
+   * 删除指定键的值
+   *
+   * @param key 要删除的键名
+   */
+  async delete(key: string): Promise<void> {
+    // 删除对应的JSON文件
+    await fs.rm(this.resolve(key));
+  }
 
-  /**
-   * 列出所有存储的键
-   *
-   * @returns 返回已存储的键名数组
-   */
-  async list(): Promise<Array<string>> {
-    // 读取目录中的所有文件
-    const files = await fs.readdir(this.directory)
-    // 对文件名进行排序
-    const sorted = files.sort((a, b) => a.localeCompare(b))
-    // 提取文件名(不包含扩展名)
-    const name = (file: string) => path.basename(file, '.json')
-    // 过滤出有效的tus文件ID
-    // 仅保留成对出现的文件(文件名相同,一个有.json扩展名)
-    return sorted.filter(
-      (file, idx) => idx < sorted.length - 1 && name(file) === name(sorted[idx + 1])
-    )
-  }
+  /**
+   * 列出所有存储的键
+   *
+   * @returns 返回已存储的键名数组
+   */
+  async list(): Promise<Array<string>> {
+    // 读取目录中的所有文件
+    const files = await fs.readdir(this.directory);
+    // 对文件名进行排序
+    const sorted = files.sort((a, b) => a.localeCompare(b));
+    // 提取文件名(不包含扩展名)
+    const name = (file: string) => path.basename(file, '.json');
+    // 过滤出有效的tus文件ID
+    // 仅保留成对出现的文件(文件名相同,一个有.json扩展名)
+    return sorted.filter((file, idx) => idx < sorted.length - 1 && name(file) === name(sorted[idx + 1]!));
+  }
 
-  /**
-   * 将键转换为完整的文件路径
-   *
-   * @param key 键名
-   * @returns 返回完整的文件路径
-   * @private
-   */
-  private resolve(key: string): string {
-    // 将键名转换为完整的JSON文件路径
-    return path.resolve(this.directory, `${key}.json`)
-  }
-}
\ No newline at end of file
+  /**
+   * 将键转换为完整的文件路径
+   *
+   * @param key 键名
+   * @returns 返回完整的文件路径
+   * @private
+   */
+  private resolve(key: string): string {
+    // 将键名转换为完整的JSON文件路径
+    return path.resolve(this.directory, `${key}.json`);
+  }
+}
diff --git a/packages/tus/src/utils/models/Metadata.ts b/packages/tus/src/utils/models/Metadata.ts
index a8a2914..37293b6 100644
--- a/packages/tus/src/utils/models/Metadata.ts
+++ b/packages/tus/src/utils/models/Metadata.ts
@@ -1,10 +1,10 @@
-import type {Upload} from './Upload'
+import type { Upload } from './Upload';
 
 // 定义ASCII码中的空格和逗号字符的码点
-const ASCII_SPACE = ' '.codePointAt(0)
-const ASCII_COMMA = ','.codePointAt(0)
+const ASCII_SPACE = ' '.codePointAt(0);
+const ASCII_COMMA = ','.codePointAt(0);
 
 // 定义用于验证Base64字符串的正则表达式
-const BASE64_REGEX = /^[\d+/A-Za-z]*={0,2}$/
+const BASE64_REGEX = /^[\d+/A-Za-z]*={0,2}$/;
 
 /**
  * 验证元数据键的有效性
@@ -12,24 +12,24 @@ const BASE64_REGEX = /^[\d+/A-Za-z]*={0,2}$/
  * @returns 如果键有效则返回true,否则返回false
  */
 export function validateKey(key: string) {
-  // 如果键的长度为0,则无效
-  if (key.length === 0) {
-    return false
-  }
+  // 如果键的长度为0,则无效
+  if (key.length === 0) {
+    return false;
+  }
 
-  // 遍历键的每个字符,检查其码点是否在有效范围内
-  for (let i = 0; i < key.length; ++i) {
-    const charCodePoint = key.codePointAt(i) as number
-    if (
-      charCodePoint > 127 || // 非ASCII字符
-      charCodePoint === ASCII_SPACE || // 空格字符
-      charCodePoint === ASCII_COMMA // 逗号字符
-    ) {
-      return false
-    }
-  }
+  // 遍历键的每个字符,检查其码点是否在有效范围内
+  for (let i = 0; i < key.length; ++i) {
+    const charCodePoint = key.codePointAt(i) as number;
+    if (
+      charCodePoint > 127 || // 非ASCII字符
+      charCodePoint === ASCII_SPACE || // 空格字符
+      charCodePoint === ASCII_COMMA // 逗号字符
+    ) {
+      return false;
+    }
+  }
 
-  return true
+  return true;
 }
 
 /**
@@ -38,13 +38,13 @@ export function validateKey(key: string) {
 * @returns 如果值是有效的Base64字符串则返回true,否则返回false
 */
 export function validateValue(value: string) {
-  // Base64字符串的长度必须是4的倍数
-  if (value.length % 4 !== 0) {
-    return false
-  }
+  // Base64字符串的长度必须是4的倍数
+  if (value.length % 4 !== 0) {
+    return false;
+  }
 
-  // 使用正则表达式验证Base64字符串的格式
-  return BASE64_REGEX.test(value)
+  // 使用正则表达式验证Base64字符串的格式
+  return BASE64_REGEX.test(value);
 }
 
 /**
@@ -54,32 +54,33 @@ export function validateValue(value: string) {
 * @throws 如果元数据字符串无效则抛出错误
 */
 export function parse(str?: string) {
-  const meta: Record<string, string | null> = {}
+  const meta: Record<string, string | null> = {};
 
-  // 如果字符串为空或仅包含空白字符,则无效
-  if (!str || str.trim().length === 0) {
-    throw new Error('Metadata string is not valid')
-  }
+  // 如果字符串为空或仅包含空白字符,则无效
+  if (!str || str.trim().length === 0) {
+    throw new Error('Metadata string is not valid');
+  }
 
-  // 遍历字符串中的每个键值对
-  for (const pair of str.split(',')) {
-    const tokens = pair.split(' ')
-    const [key, value] = tokens
-    // 验证键和值的有效性,并确保键在元数据对象中不存在
-    if (
-      ((tokens.length === 1 && validateKey(key)) ||
-        (tokens.length === 2 && validateKey(key) && validateValue(value))) &&
-      !(key in meta)
-    ) {
-      // 如果值存在,则将其从Base64解码为UTF-8字符串
-      const decodedValue = value ? Buffer.from(value, 'base64').toString('utf8') : null
-      meta[key] = decodedValue
-    } else {
-      throw new Error('Metadata string is not valid')
-    }
-  }
+  // 遍历字符串中的每个键值对
+  for (const pair of str.split(',')) {
+    const tokens = pair.split(' ');
+    const [key, value] = tokens;
+    // 验证键和值的有效性,并确保键在元数据对象中不存在
+    if (
+      key &&
+      ((tokens.length === 1 && validateKey(key)) ||
+        (tokens.length === 2 && validateKey(key) && value && validateValue(value))) &&
+      !(key in meta)
+    ) {
+      // 如果值存在,则将其从Base64解码为UTF-8字符串
+      const decodedValue = value ? Buffer.from(value, 'base64').toString('utf8') : null;
+      meta[key] = decodedValue;
+    } else {
+      throw new Error('Metadata string is not valid');
+    }
+  }
 
-  return meta
+  return meta;
 }
 
 /**
@@ -88,16 +89,16 @@ export function parse(str?: string) {
 * @returns 返回序列化后的元数据字符串
 */
 export function stringify(metadata: NonNullable<Upload['metadata']>): string {
-  return Object.entries(metadata)
-    .map(([key, value]) => {
-      // 如果值为null,则仅返回键
-      if (value === null) {
-        return key
-      }
+  return Object.entries(metadata)
+    .map(([key, value]) => {
+      // 如果值为null,则仅返回键
+      if (value === null) {
+        return key;
+      }
 
-      // 将值编码为Base64字符串,并与键组合
-      const encodedValue = Buffer.from(value, 'utf8').toString('base64')
-      return `${key} ${encodedValue}`
-    })
-    .join(',')
-}
\ No newline at end of file
+      // 将值编码为Base64字符串,并与键组合
+      const encodedValue = Buffer.from(value, 'utf8').toString('base64');
+      return `${key} ${encodedValue}`;
+    })
+    .join(',');
+}
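
The `parse`/`stringify` pair above implements the tus `Upload-Metadata` header encoding: comma-separated entries, each a key optionally followed by a space and a Base64 value, where a bare key decodes to `null`. A round-trip sketch (the import path is an assumption):

```typescript
import * as Metadata from './Metadata'; // import path assumed

// One pair with a Base64 value, one bare key (decodes to null).
const filename = Buffer.from('上传测试.txt', 'utf8').toString('base64');
const header = `filename ${filename},is_confidential`;

const parsed = Metadata.parse(header);
console.log(parsed.filename); // '上传测试.txt'
console.log(parsed.is_confidential); // null

// stringify() re-encodes values, so the round trip is lossless.
console.log(Metadata.stringify(parsed) === header); // true
```
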
diff --git a/packages/tus/src/utils/models/StreamSplitter.ts b/packages/tus/src/utils/models/StreamSplitter.ts
index 4dd94da..b6b9d45 100644
--- a/packages/tus/src/utils/models/StreamSplitter.ts
+++ b/packages/tus/src/utils/models/StreamSplitter.ts
@@ -1,8 +1,8 @@
 /* global BufferEncoding */
-import crypto from 'node:crypto'
-import fs from 'node:fs/promises'
-import path from 'node:path'
-import stream from 'node:stream'
+import crypto from 'node:crypto';
+import fs from 'node:fs/promises';
+import path from 'node:path';
+import stream from 'node:stream';
 
 /**
  * 生成指定长度的随机字符串
@@ -10,174 +10,181 @@ import stream from 'node:stream'
 * @returns 随机生成的字符串
 */
 function randomString(size: number) {
-  return crypto.randomBytes(size).toString('base64url').slice(0, size)
+  return crypto.randomBytes(size).toString('base64url').slice(0, size);
 }
 
+/**
+ * 块信息类型
+ */
+export type ChunkInfo = {
+  path: string | null; // 块文件路径
+  size: number; // 块大小
+};
+
 /**
  * StreamSplitter 配置选项
  */
 type Options = {
-  chunkSize: number // 每个块的大小
-  directory: string // 存储块的目录
-}
+  chunkSize: number; // 每个块的大小
+  directory: string; // 存储块的目录
+};
 
 /**
  * 回调函数类型
  */
-type Callback = (error: Error | null) => void
+type Callback = (error: Error | null) => void;
 
 /**
  * StreamSplitter 类,用于将流数据分割成指定大小的块
  */
 export class StreamSplitter extends stream.Writable {
-  directory: Options['directory'] // 存储块的目录
-  currentChunkPath: string | null // 当前块的路径
-  currentChunkSize: number // 当前块的大小
-  fileHandle: fs.FileHandle | null // 当前块的文件句柄
-  filenameTemplate: string // 文件名模板
-  chunkSize: Options['chunkSize'] // 每个块的大小
-  part: number // 当前块的编号
+  directory: Options['directory']; // 存储块的目录
+  currentChunkPath: string | null; // 当前块的路径
+  currentChunkSize: number; // 当前块的大小
+  fileHandle: fs.FileHandle | null; // 当前块的文件句柄
+  filenameTemplate: string; // 文件名模板
+  chunkSize: Options['chunkSize']; // 每个块的大小
+  part: number; // 当前块的编号
 
-  /**
-   * 构造函数
-   * @param chunkSize 每个块的大小
-   * @param directory 存储块的目录
-   * @param options 可选的流写入选项
-   */
-  constructor({ chunkSize, directory }: Options, options?: stream.WritableOptions) {
-    super(options)
-    this.chunkSize = chunkSize
-    this.currentChunkPath = null
-    this.currentChunkSize = 0
-    this.fileHandle = null
-    this.directory = directory
-    this.filenameTemplate = randomString(10)
-    this.part = 0
+  /**
+   * 构造函数
+   * @param chunkSize 每个块的大小
+   * @param directory 存储块的目录
+   * @param options 可选的流写入选项
+   */
+  constructor({ chunkSize, directory }: Options, options?: stream.WritableOptions) {
+    super(options);
+    this.chunkSize = chunkSize;
+    this.currentChunkPath = null;
+    this.currentChunkSize = 0;
+    this.fileHandle = null;
+    this.directory = directory;
+    this.filenameTemplate = randomString(10);
+    this.part = 0;
 
-    this.on('error', this._handleError.bind(this))
-  }
+    this.on('error', this._handleError.bind(this));
+  }
 
-  /**
-   * 写入数据到当前块
-   * @param chunk 数据块
-   * @param _ 编码方式(未使用)
-   * @param callback 回调函数
-   */
-  async _write(chunk: Buffer, _: BufferEncoding, callback: Callback) {
-    try {
-      // 如果当前没有文件句柄,则创建一个新的块
-      if (this.fileHandle === null) {
-        await this._newChunk()
-      }
+  /**
+   * 写入数据到当前块
+   * @param chunk 数据块
+   * @param _ 编码方式(未使用)
+   * @param callback 回调函数
+   */
+  async _write(chunk: Buffer, _: BufferEncoding, callback: Callback) {
+    try {
+      // 如果当前没有文件句柄,则创建一个新的块
+      if (this.fileHandle === null) {
+        await this._newChunk();
+      }
 
-      let overflow = this.currentChunkSize + chunk.length - this.chunkSize
+      let overflow = this.currentChunkSize + chunk.length - this.chunkSize;
 
-      // 如果写入的数据会导致当前块超过指定大小,则进行分割
-      while (overflow > 0) {
-        // 只写入不超过指定大小的部分
-        await this._writeChunk(chunk.subarray(0, chunk.length - overflow))
-        await this._finishChunk()
+      // 如果写入的数据会导致当前块超过指定大小,则进行分割
+      while (overflow > 0) {
+        // 只写入不超过指定大小的部分
+        await this._writeChunk(chunk.subarray(0, chunk.length - overflow));
+        await this._finishChunk();
 
-        // 剩余的数据写入新的块
-        await this._newChunk()
-        chunk = chunk.subarray(chunk.length - overflow, chunk.length)
-        overflow = this.currentChunkSize + chunk.length - this.chunkSize
-      }
+        // 剩余的数据写入新的块
+        await this._newChunk();
+        chunk = chunk.subarray(chunk.length - overflow, chunk.length);
+        overflow = this.currentChunkSize + chunk.length - this.chunkSize;
+      }
 
-      // 如果数据块小于指定大小,则直接写入
-      await this._writeChunk(chunk)
-      callback(null)
-    } catch (error: any) {
-      callback(error)
-    }
-  }
+      // 如果数据块小于指定大小,则直接写入
+      await this._writeChunk(chunk);
+      callback(null);
+    } catch (error: any) {
+      callback(error);
+    }
+  }
 
-  /**
-   * 完成写入操作
-   * @param callback 回调函数
-   */
-  async _final(callback: Callback) {
-    if (this.fileHandle === null) {
-      callback(null)
-      return
-    }
+  /**
+   * 完成写入操作
+   * @param callback 回调函数
+   */
+  async _final(callback: Callback) {
+    if (this.fileHandle === null) {
+      callback(null);
+      return;
+    }
 
-    try {
-      await this._finishChunk()
-      callback(null)
-    } catch (error: any) {
-      callback(error)
-    }
-  }
+    try {
+      await this._finishChunk();
+      callback(null);
+    } catch (error: any) {
+      callback(error);
+    }
+  }
 
-  /**
-   * 写入数据块到文件
-   * @param chunk 数据块
-   */
-  async _writeChunk(chunk: Buffer): Promise<void> {
-    await fs.appendFile(this.fileHandle as fs.FileHandle, chunk)
-    this.currentChunkSize += chunk.length
-  }
+  /**
+   * 写入数据块到文件
+   * @param chunk 数据块
+   */
+  async _writeChunk(chunk: Buffer): Promise<void> {
+    await fs.appendFile(this.fileHandle as fs.FileHandle, chunk);
+    this.currentChunkSize += chunk.length;
+  }
 
-  /**
-   * 处理错误
-   */
-  async _handleError() {
-    await this.emitEvent('chunkError', this.currentChunkPath)
-    // 如果发生错误,停止写入操作,防止数据丢失
-    if (this.fileHandle === null) { return }
-    await this.fileHandle.close()
-    this.currentChunkPath = null
-    this.fileHandle = null
-  }
+  /**
+   * 处理错误
+   */
+  async _handleError() {
+    await this.emitEvent('chunkError', this.currentChunkPath);
+    // 如果发生错误,停止写入操作,防止数据丢失
+    if (this.fileHandle === null) {
+      return;
+    }
+    await this.fileHandle.close();
+    this.currentChunkPath = null;
+    this.fileHandle = null;
+  }
 
-  /**
-   * 完成当前块的写入
-   */
-  async _finishChunk(): Promise<void> {
-    if (this.fileHandle === null) {
-      return
-    }
+  /**
+   * 完成当前块的写入
+   */
+  async _finishChunk(): Promise<void> {
+    if (this.fileHandle === null) {
+      return;
+    }
 
-    await this.fileHandle.close()
+    await this.fileHandle.close();
 
-    await this.emitEvent('chunkFinished', {
-      path: this.currentChunkPath,
-      size: this.currentChunkSize,
-    })
+    await this.emitEvent('chunkFinished', {
+      path: this.currentChunkPath,
+      size: this.currentChunkSize,
+    });
 
-    this.currentChunkPath = null
-    this.fileHandle = null
-    this.currentChunkSize = 0
-    this.part += 1
-  }
+    this.currentChunkPath = null;
+    this.fileHandle = null;
+    this.currentChunkSize = 0;
+    this.part += 1;
+  }
 
-  /**
-   * 触发事件
-   * @param name 事件名称
-   * @param payload 事件负载
-   */
-  async emitEvent<T>(name: string, payload: T) {
-    const listeners = this.listeners(name)
-    for (const listener of listeners) {
-      await listener(payload)
-    }
-  }
+  /**
+   * 触发事件
+   * @param name 事件名称
+   * @param payload 事件负载
+   */
+  async emitEvent<T>(name: string, payload: T) {
+    const listeners = this.listeners(name);
+    for (const listener of listeners) {
+      await listener(payload);
+    }
+  }
 
-  /**
-   * 创建新的块
-   */
-  async _newChunk(): Promise<void> {
-    const currentChunkPath = path.join(
-      this.directory,
-      `${this.filenameTemplate}-${this.part}`
-    )
-    await this.emitEvent('beforeChunkStarted', currentChunkPath)
-    this.currentChunkPath = currentChunkPath
+  /**
+   * 创建新的块
+   */
+  async _newChunk(): Promise<void> {
+    const currentChunkPath = path.join(this.directory, `${this.filenameTemplate}-${this.part}`);
+    await this.emitEvent('beforeChunkStarted', currentChunkPath);
+    this.currentChunkPath = currentChunkPath;
 
-    const fileHandle = await fs.open(this.currentChunkPath, 'w')
-    await this.emitEvent('chunkStarted', this.currentChunkPath)
-    this.currentChunkSize = 0
-    this.fileHandle = fileHandle
-  }
-}
\ No newline at end of file
+    const fileHandle = await fs.open(this.currentChunkPath, 'w');
+    await this.emitEvent('chunkStarted', this.currentChunkPath);
+    this.currentChunkSize = 0;
+    this.fileHandle = fileHandle;
+  }
+}
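
For readers wiring `StreamSplitter` up outside of `S3Store`: it is a plain `Writable`, so any readable can be piped in, and each filled chunk file is announced via `chunkFinished`. A small usage sketch (the import path is assumed; sizes are illustrative):

```typescript
import os from 'node:os';
import { Readable } from 'node:stream';
import { pipeline } from 'node:stream/promises';
import { StreamSplitter, type ChunkInfo } from './StreamSplitter'; // path assumed

async function demo() {
  const splitter = new StreamSplitter({ chunkSize: 4, directory: os.tmpdir() }).on(
    'chunkFinished',
    ({ path, size }: ChunkInfo) => {
      // 10 input bytes with chunkSize 4 -> chunk files of 4, 4 and 2 bytes.
      console.log(`chunk written to ${path} (${size} bytes)`);
    },
  );

  await pipeline(Readable.from([Buffer.from('0123456789')]), splitter);
}

demo().catch(console.error);
```
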
diff --git a/packages/tus/src/utils/models/index.ts b/packages/tus/src/utils/models/index.ts
index 80724bc..c0f2a55 100644
--- a/packages/tus/src/utils/models/index.ts
+++ b/packages/tus/src/utils/models/index.ts
@@ -1,8 +1,8 @@
-export { DataStore } from './DataStore'
-export * as Metadata from './Metadata'
-export { StreamSplitter } from './StreamSplitter'
-export { StreamLimiter } from './StreamLimiter'
-export { Uid } from './Uid'
-export { Upload } from './Upload'
-export type { Locker, Lock, RequestRelease } from './Locker'
-export type { CancellationContext } from './Context'
+export { DataStore } from './DataStore';
+export * as Metadata from './Metadata';
+export { StreamSplitter, type ChunkInfo } from './StreamSplitter';
+export { StreamLimiter } from './StreamLimiter';
+export { Uid } from './Uid';
+export { Upload } from './Upload';
+export type { Locker, Lock, RequestRelease } from './Locker';
+export type { 
CancellationContext } from './Context'; diff --git a/packages/tus/tsup.config.ts b/packages/tus/tsup.config.ts new file mode 100644 index 0000000..ee76bdd --- /dev/null +++ b/packages/tus/tsup.config.ts @@ -0,0 +1,20 @@ +import { defineConfig } from 'tsup'; + +export default defineConfig({ + entry: ['src/index.ts'], + format: ['esm', 'cjs'], + dts: true, + clean: true, + outDir: 'dist', + treeshake: true, + sourcemap: true, + external: [ + '@aws-sdk/client-s3', + '@shopify/semaphore', + 'debug', + 'lodash.throttle', + 'multistream', + 'ioredis', + '@redis/client', + ], +}); diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml index e26a1ef..0cf6cbe 100644 --- a/pnpm-lock.yaml +++ b/pnpm-lock.yaml @@ -17,6 +17,9 @@ importers: '@types/node': specifier: ^20 version: 20.17.50 + dotenv: + specifier: 16.4.5 + version: 16.4.5 prettier: specifier: ^3.5.3 version: 3.5.3 @@ -47,6 +50,9 @@ importers: '@repo/oidc-provider': specifier: workspace:* version: link:../../packages/oidc-provider + '@repo/storage': + specifier: workspace:* + version: link:../../packages/storage '@repo/tus': specifier: workspace:* version: link:../../packages/tus @@ -171,6 +177,9 @@ importers: superjson: specifier: ^2.2.2 version: 2.2.2 + tus-js-client: + specifier: ^4.3.1 + version: 4.3.1 valibot: specifier: ^1.1.0 version: 1.1.0(typescript@5.8.3) @@ -442,6 +451,46 @@ importers: specifier: ^5.8.3 version: 5.8.3 + packages/storage: + dependencies: + '@hono/zod-validator': + specifier: ^0.5.0 + version: 0.5.0(hono@4.7.10)(zod@3.25.23) + '@repo/db': + specifier: workspace:* + version: link:../db + '@repo/tus': + specifier: workspace:* + version: link:../tus + dotenv: + specifier: 16.4.5 + version: 16.4.5 + hono: + specifier: ^4.7.10 + version: 4.7.10 + ioredis: + specifier: 5.4.1 + version: 5.4.1 + jose: + specifier: ^6.0.11 + version: 6.0.11 + nanoid: + specifier: ^5.1.5 + version: 5.1.5 + transliteration: + specifier: ^2.3.5 + version: 2.3.5 + zod: + specifier: ^3.25.23 + version: 3.25.23 + devDependencies: + '@types/node': + specifier: ^22.15.21 + version: 22.15.21 + typescript: + specifier: ^5.0.0 + version: 5.8.3 + packages/tus: dependencies: '@aws-sdk/client-s3': @@ -1247,85 +1296,72 @@ packages: resolution: {integrity: sha512-IVfGJa7gjChDET1dK9SekxFFdflarnUB8PwW8aGwEoF3oAsSDuNUTYS+SKDOyOJxQyDC1aPFMuRYLoDInyV9Ew==} cpu: [arm64] os: [linux] - libc: [glibc] '@img/sharp-libvips-linux-arm@1.1.0': resolution: {integrity: sha512-s8BAd0lwUIvYCJyRdFqvsj+BJIpDBSxs6ivrOPm/R7piTs5UIwY5OjXrP2bqXC9/moGsyRa37eYWYCOGVXxVrA==} cpu: [arm] os: [linux] - libc: [glibc] '@img/sharp-libvips-linux-ppc64@1.1.0': resolution: {integrity: sha512-tiXxFZFbhnkWE2LA8oQj7KYR+bWBkiV2nilRldT7bqoEZ4HiDOcePr9wVDAZPi/Id5fT1oY9iGnDq20cwUz8lQ==} cpu: [ppc64] os: [linux] - libc: [glibc] '@img/sharp-libvips-linux-s390x@1.1.0': resolution: {integrity: sha512-xukSwvhguw7COyzvmjydRb3x/09+21HykyapcZchiCUkTThEQEOMtBj9UhkaBRLuBrgLFzQ2wbxdeCCJW/jgJA==} cpu: [s390x] os: [linux] - libc: [glibc] '@img/sharp-libvips-linux-x64@1.1.0': resolution: {integrity: sha512-yRj2+reB8iMg9W5sULM3S74jVS7zqSzHG3Ol/twnAAkAhnGQnpjj6e4ayUz7V+FpKypwgs82xbRdYtchTTUB+Q==} cpu: [x64] os: [linux] - libc: [glibc] '@img/sharp-libvips-linuxmusl-arm64@1.1.0': resolution: {integrity: sha512-jYZdG+whg0MDK+q2COKbYidaqW/WTz0cc1E+tMAusiDygrM4ypmSCjOJPmFTvHHJ8j/6cAGyeDWZOsK06tP33w==} cpu: [arm64] os: [linux] - libc: [musl] '@img/sharp-libvips-linuxmusl-x64@1.1.0': resolution: {integrity: sha512-wK7SBdwrAiycjXdkPnGCPLjYb9lD4l6Ze2gSdAGVZrEL05AOUJESWU2lhlC+Ffn5/G+VKuSm6zzbQSzFX/P65A==} cpu: [x64] os: [linux] - libc: 
[musl] '@img/sharp-linux-arm64@0.34.2': resolution: {integrity: sha512-D8n8wgWmPDakc83LORcfJepdOSN6MvWNzzz2ux0MnIbOqdieRZwVYY32zxVx+IFUT8er5KPcyU3XXsn+GzG/0Q==} engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} cpu: [arm64] os: [linux] - libc: [glibc] '@img/sharp-linux-arm@0.34.2': resolution: {integrity: sha512-0DZzkvuEOqQUP9mo2kjjKNok5AmnOr1jB2XYjkaoNRwpAYMDzRmAqUIa1nRi58S2WswqSfPOWLNOr0FDT3H5RQ==} engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} cpu: [arm] os: [linux] - libc: [glibc] '@img/sharp-linux-s390x@0.34.2': resolution: {integrity: sha512-EGZ1xwhBI7dNISwxjChqBGELCWMGDvmxZXKjQRuqMrakhO8QoMgqCrdjnAqJq/CScxfRn+Bb7suXBElKQpPDiw==} engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} cpu: [s390x] os: [linux] - libc: [glibc] '@img/sharp-linux-x64@0.34.2': resolution: {integrity: sha512-sD7J+h5nFLMMmOXYH4DD9UtSNBD05tWSSdWAcEyzqW8Cn5UxXvsHAxmxSesYUsTOBmUnjtxghKDl15EvfqLFbQ==} engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} cpu: [x64] os: [linux] - libc: [glibc] '@img/sharp-linuxmusl-arm64@0.34.2': resolution: {integrity: sha512-NEE2vQ6wcxYav1/A22OOxoSOGiKnNmDzCYFOZ949xFmrWZOVII1Bp3NqVVpvj+3UeHMFyN5eP/V5hzViQ5CZNA==} engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} cpu: [arm64] os: [linux] - libc: [musl] '@img/sharp-linuxmusl-x64@0.34.2': resolution: {integrity: sha512-DOYMrDm5E6/8bm/yQLCWyuDJwUnlevR8xtF8bs+gjZ7cyUNYXiSf/E8Kp0Ss5xasIaXSHzb888V1BE4i1hFhAA==} engines: {node: ^18.17.0 || ^20.3.0 || >=21.0.0} cpu: [x64] os: [linux] - libc: [musl] '@img/sharp-wasm32@0.34.2': resolution: {integrity: sha512-/VI4mdlJ9zkaq53MbIG6rZY+QRN3MLbR6usYlgITEzi4Rpx5S6LFKsycOQjkOGmqTNmkIdLjEvooFKwww6OpdQ==} @@ -1413,28 +1449,24 @@ packages: engines: {node: '>= 10'} cpu: [arm64] os: [linux] - libc: [glibc] '@next/swc-linux-arm64-musl@15.3.2': resolution: {integrity: sha512-KQkMEillvlW5Qk5mtGA/3Yz0/tzpNlSw6/3/ttsV1lNtMuOHcGii3zVeXZyi4EJmmLDKYcTcByV2wVsOhDt/zg==} engines: {node: '>= 10'} cpu: [arm64] os: [linux] - libc: [musl] '@next/swc-linux-x64-gnu@15.3.2': resolution: {integrity: sha512-uRBo6THWei0chz+Y5j37qzx+BtoDRFIkDzZjlpCItBRXyMPIg079eIkOCl3aqr2tkxL4HFyJ4GHDes7W8HuAUg==} engines: {node: '>= 10'} cpu: [x64] os: [linux] - libc: [glibc] '@next/swc-linux-x64-musl@15.3.2': resolution: {integrity: sha512-+uxFlPuCNx/T9PdMClOqeE8USKzj8tVz37KflT3Kdbx/LOlZBRI2yxuIcmx1mPNK8DwSOMNCr4ureSet7eyC0w==} engines: {node: '>= 10'} cpu: [x64] os: [linux] - libc: [musl] '@next/swc-win32-arm64-msvc@15.3.2': resolution: {integrity: sha512-LLTKmaI5cfD8dVzh5Vt7+OMo+AIOClEdIU/TSKbXXT2iScUTSxOGoBhfuv+FU8R9MLmrkIL1e2fBMkEEjYAtPQ==} @@ -1874,67 +1906,56 @@ packages: resolution: {integrity: sha512-46OzWeqEVQyX3N2/QdiU/CMXYDH/lSHpgfBkuhl3igpZiaB3ZIfSjKuOnybFVBQzjsLwkus2mjaESy8H41SzvA==} cpu: [arm] os: [linux] - libc: [glibc] '@rollup/rollup-linux-arm-musleabihf@4.41.0': resolution: {integrity: sha512-lfgW3KtQP4YauqdPpcUZHPcqQXmTmH4nYU0cplNeW583CMkAGjtImw4PKli09NFi2iQgChk4e9erkwlfYem6Lg==} cpu: [arm] os: [linux] - libc: [musl] '@rollup/rollup-linux-arm64-gnu@4.41.0': resolution: {integrity: sha512-nn8mEyzMbdEJzT7cwxgObuwviMx6kPRxzYiOl6o/o+ChQq23gfdlZcUNnt89lPhhz3BYsZ72rp0rxNqBSfqlqw==} cpu: [arm64] os: [linux] - libc: [glibc] '@rollup/rollup-linux-arm64-musl@4.41.0': resolution: {integrity: sha512-l+QK99je2zUKGd31Gh+45c4pGDAqZSuWQiuRFCdHYC2CSiO47qUWsCcenrI6p22hvHZrDje9QjwSMAFL3iwXwQ==} cpu: [arm64] os: [linux] - libc: [musl] '@rollup/rollup-linux-loongarch64-gnu@4.41.0': resolution: {integrity: sha512-WbnJaxPv1gPIm6S8O/Wg+wfE/OzGSXlBMbOe4ie+zMyykMOeqmgD1BhPxZQuDqwUN+0T/xOFtL2RUWBspnZj3w==} cpu: [loong64] os: [linux] - libc: [glibc] 
'@rollup/rollup-linux-powerpc64le-gnu@4.41.0': resolution: {integrity: sha512-eRDWR5t67/b2g8Q/S8XPi0YdbKcCs4WQ8vklNnUYLaSWF+Cbv2axZsp4jni6/j7eKvMLYCYdcsv8dcU+a6QNFg==} cpu: [ppc64] os: [linux] - libc: [glibc] '@rollup/rollup-linux-riscv64-gnu@4.41.0': resolution: {integrity: sha512-TWrZb6GF5jsEKG7T1IHwlLMDRy2f3DPqYldmIhnA2DVqvvhY2Ai184vZGgahRrg8k9UBWoSlHv+suRfTN7Ua4A==} cpu: [riscv64] os: [linux] - libc: [glibc] '@rollup/rollup-linux-riscv64-musl@4.41.0': resolution: {integrity: sha512-ieQljaZKuJpmWvd8gW87ZmSFwid6AxMDk5bhONJ57U8zT77zpZ/TPKkU9HpnnFrM4zsgr4kiGuzbIbZTGi7u9A==} cpu: [riscv64] os: [linux] - libc: [musl] '@rollup/rollup-linux-s390x-gnu@4.41.0': resolution: {integrity: sha512-/L3pW48SxrWAlVsKCN0dGLB2bi8Nv8pr5S5ocSM+S0XCn5RCVCXqi8GVtHFsOBBCSeR+u9brV2zno5+mg3S4Aw==} cpu: [s390x] os: [linux] - libc: [glibc] '@rollup/rollup-linux-x64-gnu@4.41.0': resolution: {integrity: sha512-XMLeKjyH8NsEDCRptf6LO8lJk23o9wvB+dJwcXMaH6ZQbbkHu2dbGIUindbMtRN6ux1xKi16iXWu6q9mu7gDhQ==} cpu: [x64] os: [linux] - libc: [glibc] '@rollup/rollup-linux-x64-musl@4.41.0': resolution: {integrity: sha512-m/P7LycHZTvSQeXhFmgmdqEiTqSV80zn6xHaQ1JSqwCtD1YGtwEK515Qmy9DcB2HK4dOUVypQxvhVSy06cJPEg==} cpu: [x64] os: [linux] - libc: [musl] '@rollup/rollup-win32-arm64-msvc@4.41.0': resolution: {integrity: sha512-4yodtcOrFHpbomJGVEqZ8fzD4kfBeCbpsUy5Pqk4RluXOdsWdjLnjhiKy2w3qzcASWd04fp52Xz7JKarVJ5BTg==} @@ -2271,28 +2292,24 @@ packages: engines: {node: '>=10'} cpu: [arm64] os: [linux] - libc: [glibc] '@swc/core-linux-arm64-musl@1.11.29': resolution: {integrity: sha512-PwjB10BC0N+Ce7RU/L23eYch6lXFHz7r3NFavIcwDNa/AAqywfxyxh13OeRy+P0cg7NDpWEETWspXeI4Ek8otw==} engines: {node: '>=10'} cpu: [arm64] os: [linux] - libc: [musl] '@swc/core-linux-x64-gnu@1.11.29': resolution: {integrity: sha512-i62vBVoPaVe9A3mc6gJG07n0/e7FVeAvdD9uzZTtGLiuIfVfIBta8EMquzvf+POLycSk79Z6lRhGPZPJPYiQaA==} engines: {node: '>=10'} cpu: [x64] os: [linux] - libc: [glibc] '@swc/core-linux-x64-musl@1.11.29': resolution: {integrity: sha512-YER0XU1xqFdK0hKkfSVX1YIyCvMDI7K07GIpefPvcfyNGs38AXKhb2byySDjbVxkdl4dycaxxhRyhQ2gKSlsFQ==} engines: {node: '>=10'} cpu: [x64] os: [linux] - libc: [musl] '@swc/core-win32-arm64-msvc@1.11.29': resolution: {integrity: sha512-po+WHw+k9g6FAg5IJ+sMwtA/fIUL3zPQ4m/uJgONBATCVnDDkyW6dBA49uHNVtSEvjvhuD8DVWdFP847YTcITw==} @@ -2371,28 +2388,24 @@ packages: engines: {node: '>= 10'} cpu: [arm64] os: [linux] - libc: [glibc] '@tailwindcss/oxide-linux-arm64-musl@4.1.7': resolution: {integrity: sha512-PjGuNNmJeKHnP58M7XyjJyla8LPo+RmwHQpBI+W/OxqrwojyuCQ+GUtygu7jUqTEexejZHr/z3nBc/gTiXBj4A==} engines: {node: '>= 10'} cpu: [arm64] os: [linux] - libc: [musl] '@tailwindcss/oxide-linux-x64-gnu@4.1.7': resolution: {integrity: sha512-HMs+Va+ZR3gC3mLZE00gXxtBo3JoSQxtu9lobbZd+DmfkIxR54NO7Z+UQNPsa0P/ITn1TevtFxXTpsRU7qEvWg==} engines: {node: '>= 10'} cpu: [x64] os: [linux] - libc: [glibc] '@tailwindcss/oxide-linux-x64-musl@4.1.7': resolution: {integrity: sha512-MHZ6jyNlutdHH8rd+YTdr3QbXrHXqwIhHw9e7yXEBcQdluGwhpQY2Eku8UZK6ReLaWtQ4gijIv5QoM5eE+qlsA==} engines: {node: '>= 10'} cpu: [x64] os: [linux] - libc: [musl] '@tailwindcss/oxide-wasm32-wasi@4.1.7': resolution: {integrity: sha512-ANaSKt74ZRzE2TvJmUcbFQ8zS201cIPxUDm5qez5rLEwWkie2SkGtA4P+GPTj+u8N6JbPrC8MtY8RmJA35Oo+A==} @@ -2898,6 +2911,9 @@ packages: buffer-crc32@0.2.13: resolution: {integrity: sha512-VO9Ht/+p3SN7SKWqcrgEzjGbRSJYTx+Q1pTQC0wrWqHx0vpJraQ6GtHx8tvcg1rlK1byhU5gccxgOgj7B0TDkQ==} + buffer-from@1.1.2: + resolution: {integrity: 
sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ==} + buffer@5.7.1: resolution: {integrity: sha512-EHcyIPBQ4BSGlvjB16k5KgAJ27CIsHY/2JBmCRReo48y9rQ3MaUzWX3KVlBa4U7MyX02HdVj0K7C3WaB3ju7FQ==} @@ -3062,6 +3078,9 @@ packages: resolution: {integrity: sha512-1rXeuUUiGGrykh+CeBdu5Ie7OJwinCgQY0bc7GCRxy5xVHy+moaqkpL/jqQq0MtQOeYcrqEz4abc5f0KtU7W4A==} engines: {node: '>=12.5.0'} + combine-errors@3.0.3: + resolution: {integrity: sha512-C8ikRNRMygCwaTx+Ek3Yr+OuZzgZjduCOfSQBjbM8V3MfgcjSTeto/GXP6PAwKvJz/v15b7GHZvx5rOlczFw/Q==} + combined-stream@1.0.8: resolution: {integrity: sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg==} engines: {node: '>= 0.8'} @@ -3175,6 +3194,9 @@ packages: csstype@3.1.3: resolution: {integrity: sha512-M1uQkMl8rQK/szD0LNhtqxIPLpimGm8sOBwU7lLnCpSbTyY3yeU1Vc7l4KT5zT4s/yOxHH5O7tIuuLOCnLADRw==} + custom-error-instance@2.1.1: + resolution: {integrity: sha512-p6JFxJc3M4OTD2li2qaHkDCw9SfMw82Ldr6OC9Je1aXiGfhx2W8p3GaoeaGrPJTUN9NirTM/KTxHWMUdR1rsUg==} + data-uri-to-buffer@6.0.2: resolution: {integrity: sha512-7hvf7/GW8e86rW0ptuwS3OcBGDjIi6SZva7hCyWC0yYry2cOPmLIjXAUHI6DK2HsnwJd9ifmt57i8eV2n4YNpw==} engines: {node: '>= 14'} @@ -4115,6 +4137,9 @@ packages: resolution: {integrity: sha512-34wB/Y7MW7bzjKRjUKTa46I2Z7eV62Rkhva+KkopW7Qvv/OSWBqvkSY7vusOPrNuZcUG3tApvdVgNB8POj3SPw==} engines: {node: '>=10'} + js-base64@3.7.7: + resolution: {integrity: sha512-7rCnleh0z2CkXhH67J8K1Ytz0b2Y+yxTPL+/KOJoa20hfnVQ/3/T6W/KflYI4bRHRagNeXeU2bkNGI3v1oS/lw==} + js-tokens@4.0.0: resolution: {integrity: sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==} @@ -4215,28 +4240,24 @@ packages: engines: {node: '>= 12.0.0'} cpu: [arm64] os: [linux] - libc: [glibc] lightningcss-linux-arm64-musl@1.30.1: resolution: {integrity: sha512-jmUQVx4331m6LIX+0wUhBbmMX7TCfjF5FoOH6SD1CttzuYlGNVpA7QnrmLxrsub43ClTINfGSYyHe2HWeLl5CQ==} engines: {node: '>= 12.0.0'} cpu: [arm64] os: [linux] - libc: [musl] lightningcss-linux-x64-gnu@1.30.1: resolution: {integrity: sha512-piWx3z4wN8J8z3+O5kO74+yr6ze/dKmPnI7vLqfSqI8bccaTGY5xiSGVIJBDd5K5BHlvVLpUB3S2YCfelyJ1bw==} engines: {node: '>= 12.0.0'} cpu: [x64] os: [linux] - libc: [glibc] lightningcss-linux-x64-musl@1.30.1: resolution: {integrity: sha512-rRomAK7eIkL+tHY0YPxbc5Dra2gXlI63HL+v1Pdi1a3sC+tJTcFrHX+E86sulgAXeI7rSzDYhPSeHHjqFhqfeQ==} engines: {node: '>= 12.0.0'} cpu: [x64] os: [linux] - libc: [musl] lightningcss-win32-arm64-msvc@1.30.1: resolution: {integrity: sha512-mSL4rqPi4iXq5YVqzSsJgMVFENoa4nGTT/GjO2c0Yl9OuQfPsIfncvLrEW6RbbB24WtZ3xP/2CCmI3tNkNV4oA==} @@ -4269,6 +4290,24 @@ packages: resolution: {integrity: sha512-iPZK6eYjbxRu3uB4/WZ3EsEIMJFMqAoopl3R+zuq0UjcAm/MO6KCweDgPfP3elTztoKP3KtnVHxTn2NHBSDVUw==} engines: {node: '>=10'} + lodash._baseiteratee@4.7.0: + resolution: {integrity: sha512-nqB9M+wITz0BX/Q2xg6fQ8mLkyfF7MU7eE+MNBNjTHFKeKaZAPEzEg+E8LWxKWf1DQVflNEn9N49yAuqKh2mWQ==} + + lodash._basetostring@4.12.0: + resolution: {integrity: sha512-SwcRIbyxnN6CFEEK4K1y+zuApvWdpQdBHM/swxP962s8HIxPO3alBH5t3m/dl+f4CMUug6sJb7Pww8d13/9WSw==} + + lodash._baseuniq@4.6.0: + resolution: {integrity: sha512-Ja1YevpHZctlI5beLA7oc5KNDhGcPixFhcqSiORHNsp/1QTv7amAXzw+gu4YOvErqVlMVyIJGgtzeepCnnur0A==} + + lodash._createset@4.0.3: + resolution: {integrity: sha512-GTkC6YMprrJZCYU3zcqZj+jkXkrXzq3IPBcF/fIPpNEAB4hZEtXU8zp/RwKOvZl43NUmwDbyRk3+ZTbeRdEBXA==} + + lodash._root@3.0.1: + resolution: {integrity: 
sha512-O0pWuFSK6x4EXhM1dhZ8gchNtG7JMqBtrHdoUFUWXD7dJnNSUze1GuyQr5sOs0aCvgGeI3o/OJW8f4ca7FDxmQ==} + + lodash._stringtopath@4.8.0: + resolution: {integrity: sha512-SXL66C731p0xPDC5LZg4wI5H+dJo/EO4KTqOMwLYCH3+FmmfAKJEZCm6ohGpI+T1xwsDsJCfL4OnhorllvlTPQ==} + lodash.camelcase@4.3.0: resolution: {integrity: sha512-TwuEnCnxbc3rAvhf/LbG7tJUDzhqXyFnv3dtzLOPgCG/hODL7WFnsbwktkD7yUV0RrreP/l1PALq/YSg6VvjlA==} @@ -4291,6 +4330,9 @@ packages: lodash.throttle@4.1.1: resolution: {integrity: sha512-wIkUCfVKpVsWo3JSZlc+8MB5it+2AN5W8J7YVMST30UrvcQNZ1Okbj+rbVniijTWE6FGYy4XJq/rHkas8qJMLQ==} + lodash.uniqby@4.5.0: + resolution: {integrity: sha512-IRt7cfTtHy6f1aRVA5n7kT8rgN3N1nH6MOWLcHfpWG2SH19E3JksLK38MktLxZDhlAjCP9jpIXkOnRXlu6oByQ==} + lodash@4.17.21: resolution: {integrity: sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg==} @@ -4778,6 +4820,9 @@ packages: prop-types@15.8.1: resolution: {integrity: sha512-oj87CgZICdulUohogVAR7AjlC0327U4el4L6eAvOqCeudMDVU0NThNaV+b9Df4dXgSP1gXMTnPdhfe/2qDH5cg==} + proper-lockfile@4.1.2: + resolution: {integrity: sha512-TjNPblN4BwAWMXU8s9AEz4JmQxnD1NNL7bNOY/AKUzyamc379FWASUhc/K1pL2noVb+XmZKLL68cjzLsiOAMaA==} + proxy-agent@6.5.0: resolution: {integrity: sha512-TmatMXdr2KlRiA2CyDu8GqR8EjahTG3aY3nXjdzFyoZbmB8hrBsTyMezhULIXKnC0jpfjlmiZ3+EaCzoInSu/A==} engines: {node: '>= 14'} @@ -4797,6 +4842,9 @@ packages: resolution: {integrity: sha512-hh2WYhq4fi8+b+/2Kg9CEge4fDPvHS534aOOvOZeQ3+Vf2mCFsaFBYj0i+iXcAq6I9Vzp5fjMFBlONvayDC1qg==} engines: {node: '>=6'} + querystringify@2.2.0: + resolution: {integrity: sha512-FIqgj2EUvTa7R50u0rGsyTftzjYmv/a3hO345bZNrqabNqjtgiDMgmo4mkUjd+nzU5oF3dClKqFIPUKybUyqoQ==} + queue-microtask@1.2.3: resolution: {integrity: sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A==} @@ -4893,6 +4941,9 @@ packages: resolution: {integrity: sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q==} engines: {node: '>=0.10.0'} + requires-port@1.0.0: + resolution: {integrity: sha512-KigOCHcocU3XODJxsu8i/j8T9tzT4adHiecwORRQ0ZZFcp7ahwXuRU1m+yuO90C5ZUyGeGfocHDI14M3L3yDAQ==} + resolve-from@4.0.0: resolution: {integrity: sha512-pb/MYmXstAkysRFx8piNI1tGFNQIFA3vkE3Gq4EuA1dF6gHp/+vgZqsCGJapvy8N3Q+4o7FwvquPJcnZ7RYy4g==} engines: {node: '>=4'} @@ -4917,6 +4968,10 @@ packages: resolution: {integrity: sha512-l+sSefzHpj5qimhFSE5a8nufZYAM3sBSVMAPtYkmC+4EH2anSGaEMXSD0izRQbu9nfyQ9y5JrVmp7E8oZrUjvA==} engines: {node: '>=8'} + retry@0.12.0: + resolution: {integrity: sha512-9LkiTwjUh6rT555DtE9rTX+BKByPfrMzEAtnlEtdEwr3Nkffwiihqe2bWADg+OQRjt9gl6ICdmB/ZFDCGAtSow==} + engines: {node: '>= 4'} + reusify@1.1.0: resolution: {integrity: sha512-g6QUff04oZpHs0eG5p83rFLhHeV00ug/Yf9nZM6fLeUrPguBTkTQOdpAWWspMh55TZfVQDPaN3NQJfbVRAxdIw==} engines: {iojs: '>=1.0.0', node: '>=0.10.0'} @@ -5457,6 +5512,10 @@ packages: resolution: {integrity: sha512-iHuaNcq5GZZnr3XDZNuu2LSyCzAOPwDuo5Qt+q64DfsTP1i3T2bKfxJhni2ZQxsvAoxRbuUK5QetJki4qc5aYA==} hasBin: true + tus-js-client@4.3.1: + resolution: {integrity: sha512-ZLeYmjrkaU1fUsKbIi8JML52uAocjEZtBx4DKjRrqzrZa0O4MYwT6db+oqePlspV+FxXJAyFBc/L5gwUi2OFsg==} + engines: {node: '>=18'} + tw-animate-css@1.3.0: resolution: {integrity: sha512-jrJ0XenzS9KVuDThJDvnhalbl4IYiMQ/XvpA0a2FL8KmlK+6CSMviO7ROY/I7z1NnUs5NnDhlM6fXmF40xPxzw==} @@ -5564,6 +5623,9 @@ packages: uri-js@4.4.1: resolution: {integrity: sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==} + url-parse@1.5.10: + resolution: {integrity: 
sha512-WypcfiRhfeUP9vvF0j6rw0J3hrWrw6iZv3+22h6iRMJ/8z1Tj6XfLP4DsUix5MhMPnXpiHDoKyoZ/bdCkwBCiQ==} + use-callback-ref@1.3.3: resolution: {integrity: sha512-jQL3lRnocaFtu3V00JToYz/4QkNWswxijDaCVNZRiRTO3HQDLsdu1ZtmIUvV4yPp+rvWm5j0y0TG/S61cuijTg==} engines: {node: '>=10'} @@ -8401,6 +8463,8 @@ snapshots: buffer-crc32@0.2.13: {} + buffer-from@1.1.2: {} + buffer@5.7.1: dependencies: base64-js: 1.5.1 @@ -8586,6 +8650,11 @@ snapshots: color-string: 1.9.1 optional: true + combine-errors@3.0.3: + dependencies: + custom-error-instance: 2.1.1 + lodash.uniqby: 4.5.0 + combined-stream@1.0.8: dependencies: delayed-stream: 1.0.0 @@ -8699,6 +8768,8 @@ snapshots: csstype@3.1.3: {} + custom-error-instance@2.1.1: {} + data-uri-to-buffer@6.0.2: {} data-view-buffer@1.0.2: @@ -9866,6 +9937,8 @@ snapshots: joycon@3.1.1: {} + js-base64@3.7.7: {} + js-tokens@4.0.0: {} js-yaml@4.1.0: @@ -10003,6 +10076,25 @@ snapshots: dependencies: p-locate: 5.0.0 + lodash._baseiteratee@4.7.0: + dependencies: + lodash._stringtopath: 4.8.0 + + lodash._basetostring@4.12.0: {} + + lodash._baseuniq@4.6.0: + dependencies: + lodash._createset: 4.0.3 + lodash._root: 3.0.1 + + lodash._createset@4.0.3: {} + + lodash._root@3.0.1: {} + + lodash._stringtopath@4.8.0: + dependencies: + lodash._basetostring: 4.12.0 + lodash.camelcase@4.3.0: {} lodash.defaults@4.2.0: {} @@ -10017,6 +10109,11 @@ snapshots: lodash.throttle@4.1.1: {} + lodash.uniqby@4.5.0: + dependencies: + lodash._baseiteratee: 4.7.0 + lodash._baseuniq: 4.6.0 + lodash@4.17.21: {} log-symbols@3.0.0: @@ -10515,6 +10612,12 @@ snapshots: object-assign: 4.1.1 react-is: 16.13.1 + proper-lockfile@4.1.2: + dependencies: + graceful-fs: 4.2.11 + retry: 0.12.0 + signal-exit: 3.0.7 + proxy-agent@6.5.0: dependencies: agent-base: 7.1.3 @@ -10543,6 +10646,8 @@ snapshots: split-on-first: 1.1.0 strict-uri-encode: 2.0.0 + querystringify@2.2.0: {} + queue-microtask@1.2.3: {} quick-lru@7.0.1: {} @@ -10646,6 +10751,8 @@ snapshots: require-directory@2.1.1: {} + requires-port@1.0.0: {} + resolve-from@4.0.0: {} resolve-from@5.0.0: {} @@ -10669,6 +10776,8 @@ snapshots: onetime: 5.1.2 signal-exit: 3.0.7 + retry@0.12.0: {} + reusify@1.1.0: {} rimraf@3.0.2: @@ -11299,6 +11408,16 @@ snapshots: turbo-windows-64: 2.5.3 turbo-windows-arm64: 2.5.3 + tus-js-client@4.3.1: + dependencies: + buffer-from: 1.1.2 + combine-errors: 3.0.3 + is-stream: 2.0.1 + js-base64: 3.7.7 + lodash.throttle: 4.1.1 + proper-lockfile: 4.1.2 + url-parse: 1.5.10 + tw-animate-css@1.3.0: {} type-check@0.4.0: @@ -11409,6 +11528,11 @@ snapshots: dependencies: punycode: 2.3.1 + url-parse@1.5.10: + dependencies: + querystringify: 2.2.0 + requires-port: 1.0.0 + use-callback-ref@1.3.3(@types/react@19.1.5)(react@19.1.0): dependencies: react: 19.1.0 diff --git a/test-all-creds.js b/test-all-creds.js new file mode 100644 index 0000000..3ab7964 --- /dev/null +++ b/test-all-creds.js @@ -0,0 +1,103 @@ +const http = require('http'); + +// 测试不同的凭据组合 +const credentialsList = [ + { + name: 'Docker环境变量凭据 (nice1234)', + accessKey: 'nice1234', + secretKey: 'nice1234', + }, + { + name: 'MinIO默认凭据', + accessKey: 'minioadmin', + secretKey: 'minioadmin', + }, + { + name: '你创建的新AccessKey', + accessKey: '7Nt7OyHkwIoo3zvSKdnc', + secretKey: 'EZ0cyrjJAsabTLNSqWcU47LURMppBW2kka3LuXzb', + }, +]; + +async function testCredentials(accessKey, secretKey) { + const options = { + hostname: 'localhost', + port: 9000, + path: '/?list-type=2', // 列出objects + method: 'GET', + headers: { + Host: 'localhost:9000', + Authorization: `AWS ${accessKey}:fakesignature`, // 简化测试 + }, + }; + 
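+  // NOTE: this request is deliberately unsigned (no real AWS SigV4 is
+  // computed, so `secretKey` never participates in the request). The point is
+  // only to distinguish error classes: S3/MinIO answer `InvalidAccessKeyId`
+  // for an unknown key but `SignatureDoesNotMatch` for a known key with a bad
+  // signature, which is exactly what the status/body checks below look for.
+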
+ return new Promise((resolve, reject) => { + const req = http.request(options, (res) => { + let data = ''; + res.on('data', (chunk) => (data += chunk)); + res.on('end', () => { + resolve({ + statusCode: res.statusCode, + data: data, + headers: res.headers, + }); + }); + }); + + req.on('error', reject); + req.setTimeout(3000, () => { + req.destroy(); + reject(new Error('请求超时')); + }); + req.end(); + }); +} + +async function main() { + console.log('🔍 测试所有可能的MinIO凭据...\n'); + + for (const { name, accessKey, secretKey } of credentialsList) { + console.log(`📱 测试 ${name}:`); + console.log(` Access Key: ${accessKey}`); + console.log(` Secret Key: ${secretKey.substring(0, 8)}...`); + + try { + const result = await testCredentials(accessKey, secretKey); + console.log(` 状态码: ${result.statusCode}`); + + if (result.statusCode === 403) { + if (result.data.includes('SignatureDoesNotMatch')) { + console.log(' 🔐 签名错误 (但认证方式正确)'); + } else if (result.data.includes('InvalidAccessKeyId')) { + console.log(' ❌ AccessKey无效'); + } else { + console.log(' 🔐 权限被拒绝'); + } + } else if (result.statusCode === 200) { + console.log(' ✅ 认证成功!'); + } else { + console.log(` ⚠️ 未知状态: ${result.statusCode}`); + } + + // 显示错误详情 + if (result.data.includes('')) { + const codeMatch = result.data.match(/([^<]+)<\/Code>/); + const messageMatch = result.data.match(/([^<]+)<\/Message>/); + if (codeMatch && messageMatch) { + console.log(` 错误: ${codeMatch[1]} - ${messageMatch[1]}`); + } + } + } catch (error) { + console.log(` ❌ 连接失败: ${error.message}`); + } + + console.log(''); // 空行分隔 + } + + console.log('💡 建议:'); + console.log('1. 如果Docker凭据有效,更新应用配置使用 nice1234/nice1234'); + console.log('2. 如果新AccessKey有效,确保它有正确的权限'); + console.log('3. 可以通过MinIO控制台 (http://localhost:9001) 管理用户和权限'); +} + +main().catch(console.error); diff --git a/test-correct-creds.js b/test-correct-creds.js new file mode 100644 index 0000000..bd80d91 --- /dev/null +++ b/test-correct-creds.js @@ -0,0 +1,127 @@ +// 在项目内运行,可以使用现有的AWS SDK依赖 +process.chdir('./packages/storage'); + +async function testWithCorrectCreds() { + console.log('🔍 使用正确的MinIO凭据测试...\n'); + + // 动态导入AWS SDK + const { S3 } = await import('@aws-sdk/client-s3'); + + const config = { + endpoint: 'http://localhost:9000', + region: 'us-east-1', + credentials: { + accessKeyId: 'nice1234', // Docker环境变量设置的凭据 + secretAccessKey: 'nice1234', + }, + forcePathStyle: true, + }; + + console.log('配置信息:'); + console.log('- Endpoint:', config.endpoint); + console.log('- Region:', config.region); + console.log('- Access Key:', config.credentials.accessKeyId); + console.log('- Force Path Style:', config.forcePathStyle); + console.log(); + + const s3Client = new S3(config); + + try { + // 1. 测试基本连接 + console.log('📡 测试基本连接...'); + const buckets = await s3Client.listBuckets(); + console.log('✅ 连接成功!'); + console.log('📂 现有存储桶:', buckets.Buckets?.map((b) => b.Name) || []); + console.log(); + + // 2. 检查test123存储桶 + const bucketName = 'test123'; + console.log(`🪣 检查存储桶 "${bucketName}"...`); + + try { + await s3Client.headBucket({ Bucket: bucketName }); + console.log(`✅ 存储桶 "${bucketName}" 存在`); + } catch (error) { + if (error.name === 'NotFound') { + console.log(`❌ 存储桶 "${bucketName}" 不存在,正在创建...`); + try { + await s3Client.createBucket({ Bucket: bucketName }); + console.log(`✅ 存储桶 "${bucketName}" 创建成功`); + } catch (createError) { + console.log(`❌ 创建存储桶失败:`, createError.message); + return; + } + } else { + console.log(`❌ 检查存储桶失败:`, error.message); + return; + } + } + + // 3. 
测试简单上传 + console.log('\n📤 测试简单上传...'); + const testKey = 'test-file.txt'; + const testContent = 'Hello MinIO from correct credentials!'; + + try { + await s3Client.putObject({ + Bucket: bucketName, + Key: testKey, + Body: testContent, + }); + console.log(`✅ 简单上传成功: ${testKey}`); + } catch (error) { + console.log(`❌ 简单上传失败:`, error.message); + console.log('错误详情:', error); + return; + } + + // 4. 测试分片上传初始化 + console.log('\n🔄 测试分片上传初始化...'); + const multipartKey = 'test-multipart.txt'; + + try { + const multipartUpload = await s3Client.createMultipartUpload({ + Bucket: bucketName, + Key: multipartKey, + }); + console.log(`✅ 分片上传初始化成功: ${multipartUpload.UploadId}`); + + // 立即取消这个分片上传 + await s3Client.abortMultipartUpload({ + Bucket: bucketName, + Key: multipartKey, + UploadId: multipartUpload.UploadId, + }); + console.log('✅ 分片上传取消成功'); + } catch (error) { + console.log(`❌ 分片上传初始化失败:`, error.message); + console.log('错误详情:', error); + if (error.$metadata) { + console.log('HTTP状态码:', error.$metadata.httpStatusCode); + } + return; + } + + console.log('\n🎉 所有测试通过!MinIO配置正确。'); + console.log('\n📝 下一步: 更新你的.env文件使用以下配置:'); + console.log('STORAGE_TYPE=s3'); + console.log('S3_ENDPOINT=http://localhost:9000'); + console.log('S3_REGION=us-east-1'); + console.log('S3_BUCKET=test123'); + console.log('S3_ACCESS_KEY_ID=nice1234'); + console.log('S3_SECRET_ACCESS_KEY=nice1234'); + console.log('S3_FORCE_PATH_STYLE=true'); + } catch (error) { + console.log('❌ 连接失败:', error.message); + console.log('错误详情:', error); + + if (error.message.includes('ECONNREFUSED')) { + console.log('\n💡 提示:'); + console.log('- 确保MinIO正在端口9000运行'); + console.log('- 检查docker容器状态: docker ps'); + console.log('- 重启MinIO: docker restart minio-container-name'); + } + } +} + +testWithCorrectCreds().catch(console.error); diff --git a/test-default-creds.js b/test-default-creds.js new file mode 100644 index 0000000..d62151d --- /dev/null +++ b/test-default-creds.js @@ -0,0 +1,69 @@ +const { S3 } = require('@aws-sdk/client-s3'); + +async function testWithDefaultCreds() { + console.log('🔍 测试MinIO默认凭据...\n'); + + const configs = [ + { + name: 'MinIO 默认凭据', + config: { + endpoint: 'http://localhost:9000', + region: 'us-east-1', + credentials: { + accessKeyId: 'minioadmin', + secretAccessKey: 'minioadmin', + }, + forcePathStyle: true, + }, + }, + { + name: '你的自定义凭据', + config: { + endpoint: 'http://localhost:9000', + region: 'us-east-1', + credentials: { + accessKeyId: '7Nt7OyHkwIoo3zvSKdnc', + secretAccessKey: 'EZ0cyrjJAsabTLNSqWcU47LURMppBW2kka3LuXzb', + }, + forcePathStyle: true, + }, + }, + ]; + + for (const { name, config } of configs) { + console.log(`\n📱 测试 ${name}:`); + console.log(` Access Key: ${config.credentials.accessKeyId}`); + console.log(` Secret Key: ${config.credentials.secretAccessKey.substring(0, 8)}...`); + + const s3Client = new S3(config); + + try { + // 测试列出buckets + const result = await s3Client.listBuckets(); + console.log(` ✅ 连接成功!`); + console.log(` 📂 现有buckets:`, result.Buckets?.map((b) => b.Name) || []); + + // 测试创建bucket + const bucketName = 'test123'; + try { + await s3Client.headBucket({ Bucket: bucketName }); + console.log(` ✅ Bucket "${bucketName}" 已存在`); + } catch (error) { + if (error.name === 'NotFound') { + console.log(` 📦 创建bucket "${bucketName}"...`); + await s3Client.createBucket({ Bucket: bucketName }); + console.log(` ✅ Bucket "${bucketName}" 创建成功`); + } else { + throw error; + } + } + } catch (error) { + console.log(` ❌ 连接失败:`, error.message); + if (error.$metadata?.httpStatusCode) { + console.log(` 📊 
+testWithDefaultCreds().catch(console.error);
diff --git a/test-minio-curl.sh b/test-minio-curl.sh
new file mode 100644
index 0000000..f8f0712
--- /dev/null
+++ b/test-minio-curl.sh
@@ -0,0 +1,28 @@
+#!/bin/bash
+
+echo "🔍 Testing the MinIO connection..."
+
+# Test 1: default credentials (the signature below is fake, so a 403 XML error is expected)
+echo -e "\n📱 Testing MinIO default credentials (minioadmin/minioadmin):"
+curl -s -w "HTTP status code: %{http_code}\n" \
+  -H "Host: localhost:9000" \
+  -H "Authorization: AWS minioadmin:signature" \
+  http://localhost:9000/ | head -5
+
+# Test 2: unauthenticated request to the root path
+echo -e "\n🌐 Testing unauthenticated access:"
+curl -s -w "HTTP status code: %{http_code}\n" http://localhost:9000/ | head -3
+
+# Test 3: check the MinIO console
+echo -e "\n🖥️ Testing the MinIO console:"
+curl -s -w "HTTP status code: %{http_code}\n" -I http://localhost:9001/ | grep -E "(HTTP|Server|Content-Type)"
+
+echo -e "\n💡 Tips:"
+echo "1. If MinIO runs in Docker, check the MINIO_ROOT_USER and MINIO_ROOT_PASSWORD environment variables"
+echo "2. The default credentials are usually minioadmin/minioadmin"
+echo "3. If you changed the credentials, update the config file accordingly"
+
+echo -e "\n🐳 Docker command reference:"
+echo "List the MinIO container: docker ps | grep minio"
+echo "View container logs: docker logs <container-name>"
+echo "Inspect environment variables: docker inspect <container-name> | grep -A 10 Env"
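+
+# A typical local MinIO start command, for reference (sketch; adjust the
+# container name, image tag, and credentials to your environment):
+echo -e "\n🐳 Example start command:"
+echo "docker run -d --name minio -p 9000:9000 -p 9001:9001 \\"
+echo "  -e MINIO_ROOT_USER=minioadmin -e MINIO_ROOT_PASSWORD=minioadmin \\"
+echo "  quay.io/minio/minio server /data --console-address ':9001'"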
\ No newline at end of file
diff --git a/test-minio-simple.js b/test-minio-simple.js
new file mode 100644
index 0000000..2419f8e
--- /dev/null
+++ b/test-minio-simple.js
@@ -0,0 +1,163 @@
+const http = require('http');
+const crypto = require('crypto');
+
+// MinIO configuration
+const config = {
+  endpoint: 'localhost:9000',
+  accessKeyId: '7Nt7OyHkwIoo3zvSKdnc',
+  secretAccessKey: 'EZ0cyrjJAsabTLNSqWcU47LURMppBW2kka3LuXzb',
+  bucket: 'test123',
+};
+
+// Compute an AWS Signature V4. Kept for reference only; the test requests
+// below deliberately do not use it.
+function generateSignature(method, path, headers, body, date) {
+  const region = 'us-east-1';
+  const service = 's3';
+
+  // Build the canonical request
+  const canonicalRequest = [
+    method,
+    path,
+    '', // query string
+    Object.keys(headers)
+      .sort()
+      .map((key) => `${key.toLowerCase()}:${headers[key]}`)
+      .join('\n'),
+    '',
+    Object.keys(headers)
+      .sort()
+      .map((key) => key.toLowerCase())
+      .join(';'),
+    crypto.createHash('sha256').update(body).digest('hex'),
+  ].join('\n');
+
+  // Build the string to sign
+  const stringToSign = [
+    'AWS4-HMAC-SHA256',
+    date.toISOString().replace(/[:\-]|\.\d{3}/g, ''),
+    date.toISOString().substr(0, 10).replace(/-/g, '') + '/' + region + '/' + service + '/aws4_request',
+    crypto.createHash('sha256').update(canonicalRequest).digest('hex'),
+  ].join('\n');
+
+  // Derive the signing key and compute the signature
+  const kDate = crypto
+    .createHmac('sha256', 'AWS4' + config.secretAccessKey)
+    .update(date.toISOString().substr(0, 10).replace(/-/g, ''))
+    .digest();
+  const kRegion = crypto.createHmac('sha256', kDate).update(region).digest();
+  const kService = crypto.createHmac('sha256', kRegion).update(service).digest();
+  const kSigning = crypto.createHmac('sha256', kService).update('aws4_request').digest();
+  const signature = crypto.createHmac('sha256', kSigning).update(stringToSign).digest('hex');
+
+  return signature;
+}
+
+// Test the basic connection (unauthenticated; MinIO should still answer)
+async function testConnection() {
+  console.log('🔍 Testing basic MinIO connectivity...\n');
+
+  const options = {
+    hostname: 'localhost',
+    port: 9000,
+    path: '/',
+    method: 'GET',
+  };
+
+  return new Promise((resolve, reject) => {
+    const req = http.request(options, (res) => {
+      console.log(`Status code: ${res.statusCode}`);
+      console.log(`Response headers:`, res.headers);
+
+      let data = '';
+      res.on('data', (chunk) => (data += chunk));
+      res.on('end', () => {
+        console.log('Response body:', data);
+        resolve({ statusCode: res.statusCode, data });
+      });
+    });
+
+    req.on('error', reject);
+    req.end();
+  });
+}
+
+// Test listing buckets
+async function testListBuckets() {
+  console.log('\n📂 Testing bucket listing...\n');
+
+  const date = new Date();
+  // NOTE: Signature=placeholder is not a real signature, so MinIO is expected
+  // to answer 403 SignatureDoesNotMatch; this only checks that the SigV4
+  // Authorization header is parsed at all.
+  const headers = {
+    Host: config.endpoint,
+    'X-Amz-Date': date.toISOString().replace(/[:\-]|\.\d{3}/g, ''),
+    Authorization: `AWS4-HMAC-SHA256 Credential=${config.accessKeyId}/${date.toISOString().substr(0, 10).replace(/-/g, '')}/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-date, Signature=placeholder`,
+  };
+
+  const options = {
+    hostname: 'localhost',
+    port: 9000,
+    path: '/',
+    method: 'GET',
+    headers: headers,
+  };
+
+  return new Promise((resolve, reject) => {
+    const req = http.request(options, (res) => {
+      console.log(`Status code: ${res.statusCode}`);
+      console.log(`Response headers:`, res.headers);
+
+      let data = '';
+      res.on('data', (chunk) => (data += chunk));
+      res.on('end', () => {
+        console.log('Response body:', data);
+        resolve({ statusCode: res.statusCode, data });
+      });
+    });
+
+    req.on('error', reject);
+    req.end();
+  });
+}
+
+// Test creating a bucket (unsigned request, so a 403 is expected)
+async function testCreateBucket() {
+  console.log(`\n🪣 Testing bucket creation: ${config.bucket}...\n`);
+
+  const options = {
+    hostname: 'localhost',
+    port: 9000,
+    path: `/${config.bucket}`,
+    method: 'PUT',
+  };
+
+  return new Promise((resolve, reject) => {
+    const req = http.request(options, (res) => {
+      console.log(`Status code: ${res.statusCode}`);
+      console.log(`Response headers:`, res.headers);
+
+      let data = '';
+      res.on('data', (chunk) => (data += chunk));
+      res.on('end', () => {
+        console.log('Response body:', data);
+        resolve({ statusCode: res.statusCode, data });
+      });
+    });
+
+    req.on('error', reject);
+    req.end();
+  });
+}
+
+async function main() {
+  try {
+    await testConnection();
+    await testListBuckets();
+    await testCreateBucket();
+
+    console.log('\n✅ Tests finished!');
+  } catch (error) {
+    console.error('❌ Test run failed:', error.message);
+  }
+}
+
+main();
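+
+// Expected results against a default MinIO setup (assumption): the unsigned
+// and placeholder-signed requests above should return HTTP 403 with an XML
+// error body whose <Code> explains the failure (e.g. AccessDenied or
+// SignatureDoesNotMatch); a connection error instead means nothing is
+// listening on port 9000.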