It took me two full days to climb out of this hole. Here is the write-up of the process; I hope it helps anyone who runs into the same problems!

First off: the Node.js ecosystem around large language models is still full of pitfalls, especially if, like me, you are not that familiar with Node.js.

I did not create a project from scratch (e.g. with npm init); instead I started from an open-source project:

git clone https://github.com/langchain-ai/langchain-nextjs-template.git

With this project as-is, pnpm dev renders the page correctly. I then created a lib directory and started writing TypeScript code there.

Background knowledge:

1. You need to know about the Transformers.js project, the JavaScript port of Transformers. For a Node.js server implemented in TypeScript that should integrate seamlessly with LangChain, this is the version to use. (I have also seen setups that call Python from Node; that may be the better option, since it makes integrating with LLMs much easier. I will look into it when I have time.)

2. You need to know about the LangChain.js project (it has a repository on GitHub), which integrates the hf_transformers support from item 1. The main point is that item 1 has to be working first.

Install it up front: pnpm i @xenova/transformers

3. Transformers.js loads ONNX models, not PyTorch models, so that is one more mountain to climb. All in all, the required tech stack is quite broad.

Problem 1: CommonJS vs. ES module compatibility

The @xenova/transformers package is published with "type": "module". If the main project is CommonJS, you get an incompatibility.

Problem 2: while debugging the TypeScript code with ts-node, I ran into a few typical issues:

① `import {} from` a module file is not supported

D:\code\langchain\test>ts-node llama_cpp_basic.ts
C:\Users\baron\AppData\Roaming\npm\node_modules\ts-node\dist\index.js:851
            return old(m, filename);
                   ^
Error [ERR_REQUIRE_ESM]: require() of ES Module D:\code\langchain\test\node_modules\.pnpm\node-llama-cpp@2.8.0\node_modules\node-llama-cpp\dist\index.js from D:\code\langchain\test\node_modules\.pnpm\langchain@0.0.200_node-llama-cpp@2.8.0\node_modules\langchain\dist\util\llama_cpp.cjs not supported.
Instead change the require of index.js in D:\code\langchain\test\node_modules\.pnpm\langchain@0.0.200_node-llama-cpp@2.8.0\node_modules\langchain\dist\util\llama_cpp.cjs to a dynamic import() which is available in all CommonJS modules.
    at require.extensions.<computed> [as .js] (C:\Users\baron\AppData\Roaming\npm\node_modules\ts-node\dist\index.js:851:20)
    at Object.<anonymous> (D:\code\langchain\test\node_modules\.pnpm\langchain@0.0.200_node-llama-cpp@2.8.0\node_modules\langchain\dist\util\llama_cpp.cjs:4:26)
    at require.extensions.<computed> [as .js] (C:\Users\baron\AppData\Roaming\npm\node_modules\ts-node\dist\index.js:851:20)
    at Object.<anonymous> (D:\code\langchain\test\node_modules\.pnpm\langchain@0.0.200_node-llama-cpp@2.8.0\node_modules\langchain\dist\embeddings\llama_cpp.cjs:4:24)
    at require.extensions.<computed> [as .js] (C:\Users\baron\AppData\Roaming\npm\node_modules\ts-node\dist\index.js:851:20)
    at Object.<anonymous> (D:\code\langchain\test\node_modules\.pnpm\langchain@0.0.200_node-llama-cpp@2.8.0\node_modules\langchain\embeddings\llama_cpp.cjs:1:18)
    at require.extensions.<computed> [as .js] (C:\Users\baron\AppData\Roaming\npm\node_modules\ts-node\dist\index.js:851:20)
    at Object.<anonymous> (D:\code\langchain\test\llama_cpp_basic.ts:3:21)
    at m._compile (C:\Users\baron\AppData\Roaming\npm\node_modules\ts-node\dist\index.js:857:29)
    at require.extensions.<computed> [as .ts] (C:\Users\baron\AppData\Roaming\npm\node_modules\ts-node\dist\index.js:859:16)
    at phase4 (C:\Users\baron\AppData\Roaming\npm\node_modules\ts-node\dist\bin.js:466:20)
    at bootstrap (C:\Users\baron\AppData\Roaming\npm\node_modules\ts-node\dist\bin.js:54:12)
    at main (C:\Users\baron\AppData\Roaming\npm\node_modules\ts-node\dist\bin.js:33:12)
    at Object.<anonymous> (C:\Users\baron\AppData\Roaming\npm\node_modules\ts-node\dist\bin.js:579:5) {
  code: 'ERR_REQUIRE_ESM'
}
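The message above literally spells out the workaround: from CommonJS you can still load an ESM-only package with a dynamic import(). A minimal sketch of the pattern, using a Node built-in module as a stand-in for an ESM-only dependency such as node-llama-cpp:

```typescript
// ERR_REQUIRE_ESM means a CommonJS file tried to require() an ES module.
// The fix suggested by the error itself is a dynamic import(), which is
// available in all CommonJS modules:
async function loadEsmOnly() {
  // node:fs/promises stands in here for an ESM-only package
  // such as node-llama-cpp or @xenova/transformers.
  const mod = await import("node:fs/promises");
  return mod;
}

loadEsmOnly().then((mod) => {
  console.log(typeof mod.readFile); // "function"
});
```

The catch is that import() is asynchronous, so the loaded module is only usable inside an async function or a .then() callback.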

② MODULE_NOT_FOUND, hit while testing a LangChain.js example

D:\code\langchain\test>ts-node llama_cpp_basic.ts
Error: Cannot find module 'node-llama-cpp'
Require stack:
- D:\code\langchain\test\node_modules\.pnpm\langchain@0.0.200\node_modules\langchain\dist\util\llama_cpp.cjs
- D:\code\langchain\test\node_modules\.pnpm\langchain@0.0.200\node_modules\langchain\dist\embeddings\llama_cpp.cjs
- D:\code\langchain\test\node_modules\.pnpm\langchain@0.0.200\node_modules\langchain\embeddings\llama_cpp.cjs
- D:\code\langchain\test\llama_cpp_basic.ts
    at Function.Module._resolveFilename (node:internal/modules/cjs/loader:1048:15)
    at Function.Module._resolveFilename.sharedData.moduleResolveFilenameHook.installedValue [as _resolveFilename] (C:\Users\baron\AppData\Roaming\npm\node_modules\ts-node\node_modules\@cspotcode\source-map-support\source-map-support.js:811:30)
    at Function.Module._load (node:internal/modules/cjs/loader:901:27)
    at Module.require (node:internal/modules/cjs/loader:1115:19)
    at require (node:internal/modules/helpers:130:18)
    at Object.<anonymous> (D:\code\langchain\test\node_modules\.pnpm\langchain@0.0.200\node_modules\langchain\dist\util\llama_cpp.cjs:4:26)
    at Module._compile (node:internal/modules/cjs/loader:1241:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1295:10)
    at Object.require.extensions.<computed> [as .js] (C:\Users\baron\AppData\Roaming\npm\node_modules\ts-node\src\index.ts:1608:43)
    at Module.load (node:internal/modules/cjs/loader:1091:32) {
  code: 'MODULE_NOT_FOUND',
  requireStack: [
    'D:\\code\\langchain\\test\\node_modules\\.pnpm\\langchain@0.0.200\\node_modules\\langchain\\dist\\util\\llama_cpp.cjs',
    'D:\\code\\langchain\\test\\node_modules\\.pnpm\\langchain@0.0.200\\node_modules\\langchain\\dist\\embeddings\\llama_cpp.cjs',
    'D:\\code\\langchain\\test\\node_modules\\.pnpm\\langchain@0.0.200\\node_modules\\langchain\\embeddings\\llama_cpp.cjs',
    'D:\\code\\langchain\\test\\llama_cpp_basic.ts'
  ]
}

I hit this one several times. I kept ignoring the first line of the message and hunting for answers in the wrong places.

D:\code\langchain\test>pnpm i node-llama-cpp
Packages: +151 -1
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
Progress: resolved 219, reused 162, downloaded 57, added 151, done
Downloading registry.npmmirror.com/node-llama-cpp/2.8.0: 8.13 MB/8.13 MB, done
node_modules/.pnpm/node-llama-cpp@2.8.0/node_modules/node-llama-cpp: Running postinstall script, done in 345ms

dependencies:
+ node-llama-cpp 2.8.0

Done in 5.7s

Error: module file does not exist, even though the .ts file is right there

③ The .ts file extension is not recognized

Solution:

ts-node does not work on Node v20 and above (I only discovered this after upgrading from v18 to v21, then downgraded back to v18).

ts-node --esm had no effect.

The node --loader ts-node/esm approach had no effect either.

Finally, I found a better tool: tsx.

Install it with npm install tsx -g (global installation recommended).

Then add this to the scripts section of package.json:

 "tsx": "tsx .\\lib\\embedding.ts",

With that, debugging .ts files finally worked.

Problem 3: connect timeout errors. This one I know well: HF models are downloaded over the network, and huggingface.co is effectively unreachable from mainland China (even a proxy did not help for me). You need a local model.

import { HuggingFaceTransformersEmbeddings } from "langchain/embeddings/hf_transformers";

const embs = new HuggingFaceTransformersEmbeddings({modelName:"bge-large-en"})

const docs = "doc"

console.log(embs.embedQuery(docs))

console.log("tttt")

How do you point the code above at a local model and forbid network access? Add the following at the top of the file:

import { env } from '@xenova/transformers';

// Specify a custom location for models (defaults to '/models/').
env.localModelPath = 'D:/code/embedding/';

// Disable the loading of remote models from the Hugging Face Hub:
env.allowRemoteModels = false;

// Set location of .wasm files. Defaults to use a CDN.
// env.backends.onnx.wasm.wasmPaths = '/path/to/files/';

OK, loading local models is solved. Note that Transformers.js supports only a specific list of models; see scripts/supported_models.py in the project. Many Chinese models, such as bge-large-zh, are not on it; whether they work after conversion and configuration still needs to be confirmed.
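A quick sanity check on the local model directory can save a round of cryptic loader errors. This sketch assumes the layout Transformers.js looks for under env.localModelPath (config.json, tokenizer.json, and an onnx/ subdirectory containing the quantized model); the exact file list is my assumption, adjust as needed:

```typescript
import { existsSync } from "node:fs";
import { join } from "node:path";

// Files transformers.js typically needs for a local embedding model
// (assumed layout; the quantized ONNX file name matters, as shown later).
const EXPECTED = [
  "config.json",
  "tokenizer.json",
  join("onnx", "model_quantized.onnx"),
];

// Returns the expected files that are NOT present under modelDir.
function missingModelFiles(modelDir: string): string[] {
  return EXPECTED.filter((f) => !existsSync(join(modelDir, f)));
}

// Example: list whatever is missing under the local model directory.
console.log(missingModelFiles("D:/code/embedding/bge-large-en"));
```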

Problem 4: Error: `local_files_only=true` or `env.allowRemoteModels=false` and file was not found locally at "D:/code/embedding/bge-large-en/onnx/model_quantized.onnx".

The PyTorch checkpoint downloaded from the Hub cannot be loaded; the model must be ONNX, the unified cross-platform model format.

Pitfall 1: using convert.py from the scripts directory of the Transformers.js project

python -m convert --quantize --model_id bge-large-en

That is the method from the Transformers.js README, and it did not work for me! At first I suspected my Python 3.11 was too new, so I created a fresh 3.9 virtual environment, but the error was the same.

It fails with:

(embedding_3.9) D:\code\embedding>python -m convert --quantize --model_id bge-large-en
Framework not specified. Using pt to export to ONNX.
Traceback (most recent call last):
  File "C:\Users\baron\anaconda3\envs\embedding_3.9\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\baron\anaconda3\envs\embedding_3.9\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "D:\code\embedding\convert.py", line 396, in <module>
    main()
  File "D:\code\embedding\convert.py", line 357, in main
    main_export(**export_kwargs)
  File "C:\Users\baron\anaconda3\envs\embedding_3.9\lib\site-packages\optimum\exporters\onnx\__main__.py", line 313, in main_export
    task = TasksManager.infer_task_from_model(model_name_or_path)
  File "C:\Users\baron\anaconda3\envs\embedding_3.9\lib\site-packages\optimum\exporters\tasks.py", line 1453, in infer_task_from_model
    task = cls._infer_task_from_model_name_or_path(model, subfolder=subfolder, revision=revision)
  File "C:\Users\baron\anaconda3\envs\embedding_3.9\lib\site-packages\optimum\exporters\tasks.py", line 1377, in _infer_task_from_model_name_or_path
    raise RuntimeError("Cannot infer the task from a local directory yet, please specify the task manually.")
RuntimeError: Cannot infer the task from a local directory yet, please specify the task manually.

I dug through the source code for a while without success, so I took a different route and looked for a generic PyTorch-to-ONNX conversion. That worked!

Reference: "Huggingface: exporting transformers models to ONNX", Tencent Cloud developer community (tencent.com).

The model conversion runs in Python, so you need a Python environment with the transformers ONNX exporter installed.

Note: run the cmd window as administrator, otherwise it fails.

pip install transformers[onnx]

(embedding_3.9) D:\code\embedding>python -m transformers.onnx --model=bge-large-en onnx/
Framework not specified. Using pt to export to ONNX.
Using the export variant default. Available variants are:
        - default: The default ONNX variant.
Using framework PyTorch: 2.1.1+cpu
Overriding 1 configuration item(s)
        - use_cache -> False
Post-processing the exported models...
Deduplicating shared (tied) weights...
Validating ONNX model onnx/model.onnx...
        -[✓] ONNX model output names match reference model (last_hidden_state)
        - Validating ONNX Model output "last_hidden_state":
                -[✓] (2, 16, 1024) matches (2, 16, 1024)
                -[✓] all values close (atol: 0.0001)
The ONNX export succeeded and the exported model was saved at: onnx
The export was done by optimum.exporters.onnx. We recommend using to use this package directly in future, as transformers.onnx is deprecated, and will be removed in v5. You can find more information here: https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model.

onnx/ is the export directory. Note: copy this directory into the original model directory, because the earlier errors were all looking for an onnx subdirectory under the model. Also rename the model file from model.onnx to model_quantized.onnx, otherwise you get the error below:


Error: `local_files_only=true` or `env.allowRemoteModels=false` and file was not found locally at "D:/code/embedding/bge-large-en/onnx/model_quantized.onnx".
    at getModelFile (d:\code\langchain\langchain-nextjs-template\node_modules\.pnpm\@xenova+transformers@2.9.0\node_modules\@xenova\transformers\src\utils\hub.js:461:27)
    at constructSession (d:\code\langchain\langchain-nextjs-template\node_modules\.pnpm\@xenova+transformers@2.9.0\node_modules\@xenova\transformers\src\models.js:126:18)
    at async Promise.all (index 1)
    at Function.from_pretrained (d:\code\langchain\langchain-nextjs-template\node_modules\.pnpm\@xenova+transformers@2.9.0\node_modules\@xenova\transformers\src\models.js:782:20)
    at Function.from_pretrained (d:\code\langchain\langchain-nextjs-template\node_modules\.pnpm\@xenova+transformers@2.9.0\node_modules\@xenova\transformers\src\models.js:4082:20)
    at async Promise.all (index 1)
    at loadItems (d:\code\langchain\langchain-nextjs-template\node_modules\.pnpm\@xenova+transformers@2.9.0\node_modules\@xenova\transformers\src\pipelines.js:2568:5)
    at pipeline (d:\code\langchain\langchain-nextjs-template\node_modules\.pnpm\@xenova+transformers@2.9.0\node_modules\@xenova\transformers\src\pipelines.js:2514:19)
    at async HuggingFaceTransformersEmbeddings.runEmbedding (D:\code\langchain\langchain-nextjs-template\node_modules\.pnpm\langchain@0.0.187_@supabase+supabase-js@2.39.0_@xenova+transformers@2.9.0\node_modules\langchain\dist\embeddings\hf_transformers.cjs:64:22)
    at async HuggingFaceTransformersEmbeddings.embedQuery (D:\code\langchain\langchain-nextjs-template\node_modules\.pnpm\langchain@0.0.187_@supabase+supabase-js@2.39.0_@xenova+transformers@2.9.0\node_modules\langchain\dist\embeddings\hf_transformers.cjs:58:22)

Node.js v20.9.0
 ELIFECYCLE  Command failed with exit code 1.
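The rename is needed because Transformers.js loads quantized models by default, and the ONNX file name it looks for follows its `quantized` option. A small sketch of that mapping, plus an alternative I have not verified (the LangChain wrapper may not expose the option):

```typescript
// transformers.js picks the ONNX file name from its `quantized` option,
// which defaults to true. That is why it demands model_quantized.onnx:
function onnxFileName(quantized: boolean): string {
  return quantized ? "model_quantized.onnx" : "model.onnx";
}

console.log(onnxFileName(true));  // "model_quantized.onnx"
console.log(onnxFileName(false)); // "model.onnx"

// Unverified alternative to renaming, when calling transformers.js directly:
//   const extractor = await pipeline("feature-extraction", "bge-large-en",
//     { quantized: false }); // should load onnx/model.onnx instead
```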

After that, running pnpm tsx again succeeds!

(base) PS D:\code\langchain\langchain-nextjs-template> pnpm tsx

> langchain-nextjs-template@0.0.0 tsx D:\code\langchain\langchain-nextjs-template
> tsx .\lib\embedding.ts

Promise { <pending> }
tttt

Problem 5: an await error once I started writing real code

After adding await to the code, the following error appears:

[esbuild Error]: Top-level await is currently not supported with the "cjs" output format
    at D:\code\langchain\langchain-nextjs-template\lib\embedding.ts:19:12
 ELIFECYCLE  Command failed with exit code 1.

Solution:

1. Edit tsconfig.json: change target and module

{
  "compilerOptions": {
    "target": "esnext", // ①从es5 改成esnext
    "lib": ["dom", "dom.iterable", "esnext"],
    "allowJs": true,
    "skipLibCheck": true,
    "strict": true,
    "forceConsistentCasingInFileNames": true,
    "noEmit": true,
    "esModuleInterop": true,
    "module": "esnext", //②修改这个
    "moduleResolution": "bundler",
    "resolveJsonModule": true,
    "isolatedModules": true,
    "jsx": "preserve",
    "incremental": true,
    "plugins": [
      {
        "name": "next"
      }
    ],
    "paths": {
      "@/*": ["./*"]
    }
  },
  "include": ["next-env.d.ts", "**/*.ts", "**/*.tsx", ".next/types/**/*.ts"],
  "exclude": ["node_modules"]
}

2. Edit package.json: set "type" to "module"

{
  "name": "langchain-nextjs-template",
  "version": "0.0.0",
  "private": true,
  "type": "module",
  "scripts": {
    "dev": "next dev",
    "tsx": "tsx .\\lib\\embedding.ts",
    "build": "next build",
    "start": "next start",
    "lint": "next lint",
    "format": "prettier --write \"app\""
  },
  "engines": {
    "node": ">=18"
  },
  "dependencies": {
    "@next/bundle-analyzer": "^13.4.19",
    "@supabase/supabase-js": "^2.32.0",
    "@types/node": "20.4.5",
    "@types/react": "18.2.17",
    "@types/react-dom": "18.2.7",
    "@xenova/transformers": "^2.9.0",
    "ai": "^2.1.28",
    "autoprefixer": "10.4.14",
    "eslint": "8.46.0",
    "eslint-config-next": "13.4.12",
    "langchain": "^0.0.187",
    "next": "14.0.1",
    "postcss": "8.4.27",
    "react": "18.2.0",
    "react-dom": "18.2.0",
    "react-toastify": "^9.1.3",
    "tailwindcss": "3.3.3",
    "typescript": "5.1.6",
    "zod": "^3.22.3",
    "zod-to-json-schema": "^3.21.4"
  },
  "devDependencies": {
    "prettier": "3.0.0",
    "tsx": "^4.6.2"
  }
}
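If changing the module format is not an option (for instance, when the surrounding project must stay CommonJS), the esbuild error can also be sidestepped at the code level by moving the await into an async function. A minimal sketch, with the embedQuery call replaced by a stand-in promise:

```typescript
// Top-level await requires ESM output; wrapping the code in an async
// main() works in both CJS and ESM output formats.
async function main(): Promise<void> {
  // stand-in for: await embs.embedQuery(docs)
  const result = await Promise.resolve([0.1, 0.2, 0.3]);
  console.log(result.length); // 3
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```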

The final code, embedding.ts:

import { env } from '@xenova/transformers';

// Specify a custom location for models (defaults to '/models/').
env.localModelPath = 'D:/code/embedding/';
// Disable the loading of remote models from the Hugging Face Hub:
env.allowRemoteModels = false;
// Set location of .wasm files. Defaults to use a CDN.
// env.backends.onnx.wasm.wasmPaths = '/path/to/files/';


import { HuggingFaceTransformersEmbeddings } from "langchain/embeddings/hf_transformers";
import { LlamaCppEmbeddings } from "langchain/embeddings/llama_cpp";

const embs = new HuggingFaceTransformersEmbeddings({modelName:"bge-large-en"})
const docs = "doc"
console.log( await embs.embedQuery(docs))

Output:

(base) PS D:\code\langchain\langchain-nextjs-template> pnpm tsx

> langchain-nextjs-template@0.0.0 tsx D:\code\langchain\langchain-nextjs-template
> tsx .\lib\embedding.ts

[
    0.003330473555251956,   -0.02205600030720234,      0.0157240591943264,
    0.019194256514310837,   -0.04094541072845459,    -0.04336366057395935,
    0.007914731279015541,    0.01676217094063759, -0.00026966544101014733,
     0.04427381977438927,   0.036663103848695755,   0.0008914328063838184,
     0.03049340285360813,  -0.014682380482554436,    -0.01731256954371929,
    0.021954838186502457, -0.0004227317695040256,   -0.027069522067904472,
   -0.047488000243902206,    0.00778471864759922,   -0.008551156148314476,
    0.002418451476842165,  -0.054124195128679276,    0.002399906050413847,
    -0.01912163756787777,   0.029669228941202164,   -0.013196144253015518,
     0.02986353449523449,    0.06598247587680817,     0.05015171319246292,
   -0.032113343477249146,  -0.023680096492171288,    0.012189813889563084,
   -0.024729931727051735,  -0.012590361759066582,   -0.028836917132139206,
     0.03547510504722595,   -0.03484269976615906,     0.01825197972357273,
   -0.037200022488832474,   -0.00193393521476537,    -0.02055542729794979,
      0.0485348254442215,  -0.042570747435092926,    -0.07042783498764038,
  -0.0019728969782590866,  -0.029526256024837494,    -0.04461062327027321,
    0.013710870407521725,   -0.03797616809606552,    0.025837218388915062,
   -0.013484465889632702,    0.01923159696161747,    -0.01916288211941719,
    0.023215124383568764,  -0.009301901794970036,   -0.006729026325047016,
    0.004620965104550123,  -0.025768844410777092,    0.009178749285638332,
    0.004959911108016968,   -0.01205375324934721,    0.012440511025488377,
    -0.03975430876016617,   0.012500863522291183,    0.011405334807932377,
    -0.00874609500169754,  -0.015204871073365211,     0.02256081812083721,
   -0.007945860736072063,   0.009428582154214382,   -0.003165576374158263,
   0.0007117743953131139,  -0.008373155258595943,    0.006578522268682718,
     0.01326432079076767,    0.02995370887219906,   -0.003609190694987774,
    -0.03194940462708473,    0.03550690785050392,    0.017655085772275925,
     0.03506942093372345,   0.024607902392745018,    0.046828530728816986,
    -0.03198009356856346,   -0.03032330609858036,    0.018970536068081856,
     0.00927543081343174,    0.00751956831663847,    0.004081718157976866,
     0.02597544528543949,    0.04809199646115303,  -0.0011864954140037298,
   -0.007367478217929602,     0.0500517338514328,     0.04424132779240608,
   -0.007328629028052092,   0.014350713230669498,    0.004544962663203478,
   -0.004281576257199049,
  ... 924 more items
]
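With embeddings finally coming out, the natural next step is comparing them. A cosine-similarity helper in plain TypeScript, independent of the model (the 1024-dimension vectors above drop straight in):

```typescript
// Cosine similarity between two embedding vectors, e.g. the 1024-dim
// vectors produced by bge-large-en above. Returns a value in [-1, 1].
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

For semantic search, embed the query and each document with embedQuery, then rank documents by this score.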
