The author selected the Open Internet / Free Speech fund to receive a donation as part of the Write for DOnations program.

Introduction

Kubernetes allows users to create resilient and scalable services with a single command. Like anything that sounds too good to be true, it has a catch: you must first prepare a suitable Docker image and thoroughly test it.

Continuous Integration (CI) is the practice of testing the application on each update. Doing this manually is tedious and error-prone, but a CI platform runs the tests for you, catches errors early, and locates the point at which the errors were introduced. Release and deployment procedures are often complicated, time-consuming, and require a reliable build environment. With Continuous Delivery (CD) you can build and deploy your application on each update without human intervention.

To automate the whole process, you’ll use Semaphore, a Continuous Integration and Delivery (CI/CD) platform.

In this tutorial, you’ll build an address book API service with Node.js. The service exposes a simple RESTful API to create, delete, and find people in the database. You’ll use Git to push the code to GitHub. Then you’ll use Semaphore to test the application, build a Docker image, and deploy it to a DigitalOcean Kubernetes cluster. For the database, you’ll create a PostgreSQL cluster using DigitalOcean Managed Databases.

Prerequisites

Before reading on, ensure you have the following:

  • A DigitalOcean account and a Personal Access Token. Follow Create a Personal Access Token to set one up for your account.

  • A Docker Hub account.

  • A GitHub account.

  • A Semaphore account; you can sign up with your GitHub account.

  • A new GitHub repository called addressbook for the project. When creating the repository, select the Initialize this repository with a README checkbox and select Node in the Add .gitignore menu. Follow GitHub’s Create a Repo help page for more details.

  • Git installed on your local machine and set up to work with your GitHub account. If you are unfamiliar or need a refresher, consider reading the How to use Git reference guide.

  • curl installed on your local machine.

  • Node.js installed on your local machine. In this tutorial, you’ll use Node.js version 10.16.0.

Step 1 — Creating the Database and the Kubernetes Cluster

Start by provisioning the services that will power the application: the DigitalOcean Database Cluster and the DigitalOcean Kubernetes Cluster.

Log in to your DigitalOcean account and create a project. A project lets you organize all the resources that make up the application. Call the project addressbook.

Next, create a PostgreSQL cluster. The PostgreSQL database service will hold the application’s data. You can pick the latest version available. It should take a few minutes before the service is ready.

Once the PostgreSQL service is ready, create a database and a user. Set the database name to addressbook_db and set the username to addressbook_user. Take note of the password that’s generated for your new user. Databases are PostgreSQL’s way of organizing data. Usually, each application has its own database, although there are no hard rules about this. The application will use the username and password to get access to the database so it can save and retrieve its data.

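If you have the psql client installed, you can optionally verify the new credentials from your local machine before moving on. This connection string is only a sketch; substitute your cluster’s real host, port, and password:

  • psql "postgres://addressbook_user:your_db_user_password@your_db_cluster_host:your_db_cluster_port/addressbook_db?sslmode=require"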

Finally, create a Kubernetes Cluster. Choose the same region in which the database is running. Name the cluster addressbook-server and set the number of nodes to 3.

While the nodes are provisioning, you can start building your application.

Step 2 — Writing the Application

Let’s build the address book application you’re going to deploy. To start, clone the GitHub repository you created in the prerequisites so you have a local copy of the .gitignore file GitHub created for you, and you’ll be able to commit your application code quickly without having to manually create a repository. Open your browser and go to your new GitHub repository. Click on the Clone or download button and copy the provided URL. Use Git to clone the empty repository to your machine:

  • git clone https://github.com/your_github_username/addressbook

Enter the project directory:

  • cd addressbook

With the repository cloned, you can start writing the app. You’ll build two components: a module that interacts with the database, and a module that provides the HTTP service. The database module will know how to save and retrieve persons from the address book database, and the HTTP module will receive requests and respond accordingly.

While not strictly mandatory, it’s good practice to test your code while you write it, so you’ll also create a testing module. This is the planned layout for the application:

  • database.js: database module. It handles database operations.

  • app.js: the end user module and the main application. It provides an HTTP service for the users to connect to.

  • database.test.js: tests for the database module.

In addition, you’ll want a package.json file for your project, which describes the project and its required dependencies. You can either create it manually with your editor, or interactively using npm. Run the npm init command to create the file interactively:

  • npm init

The command will ask for some information to get started. Fill in the values as shown in the example. If you don’t see an answer listed, leave the answer blank, which uses the default value in parentheses:

Output
package name: (addressbook) addressbook
version: (1.0.0) 1.0.0
description: Addressbook API and database
entry point: (index.js) app.js
test command:
git repository: URL for your GitHub repository
keywords:
author: Sammy the Shark <sammy@example.com>
license: (ISC)
About to write to package.json:

{
  "name": "addressbook",
  "version": "1.0.0",
  "description": "Addressbook API and database",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}

Is this OK? (yes) yes

Now you can start writing the code. The database is at the core of the service you’re developing. It’s essential to have a well-designed database model before writing any other components. Consequently, it makes sense to start with the database code.

You don’t have to code all the bits of the application; Node.js has a large library of reusable modules. For instance, you don’t have to write any SQL queries if you have the Sequelize ORM module in the project. This module provides an interface that handles databases as JavaScript objects and methods. It can also create tables in your database. Sequelize needs the pg module to work with PostgreSQL.

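For example, with the Person model you’ll define below, a lookup that would otherwise be a hand-written SELECT statement becomes a method call. The following snippet is only an illustration of the style, not part of the application code:

// Illustration only (inside an async function): instead of writing
// SELECT * FROM "People" WHERE "lastName" = 'Davis Jr.';
// you call a method on the model:
const people = await Person.findAll({ where: { lastName: 'Davis Jr.' } });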

Install modules using the npm install command with the --save option, which tells npm to save the module in package.json. Execute this command to install both sequelize and pg:

  • npm install --save sequelize pg

Create a new JavaScript file to hold the database code:

  • nano database.js

Import the sequelize module by adding this line to the file:

database.js
const Sequelize = require('sequelize');

. . .

Then, below that line, initialize a sequelize object with the database connection parameters, which you’ll retrieve from the system environment. This keeps the credentials out of your code so you don’t accidentally share your credentials when you push your code to GitHub. You can use process.env to access environment variables, and JavaScript’s || operator to set defaults for undefined variables:

database.js
. . .

const sequelize = new Sequelize(process.env.DB_SCHEMA || 'postgres',
                                process.env.DB_USER || 'postgres',
                                process.env.DB_PASSWORD || '',
                                {
                                    host: process.env.DB_HOST || 'localhost',
                                    port: process.env.DB_PORT || 5432,
                                    dialect: 'postgres',
                                    dialectOptions: {
                                        ssl: process.env.DB_SSL == "true"
                                    }
                                });

. . .

Now define the Person model. To keep the example from getting too complex, you’ll only create two fields: firstName and lastName, both storing string values. Add the following code to define the model:

database.js
. . .

const Person = sequelize.define('Person', {
    firstName: {
        type: Sequelize.STRING,
        allowNull: false
    },
    lastName: {
        type: Sequelize.STRING,
        allowNull: true
    },
});

. . .

This defines the two fields, making firstName mandatory with allowNull: false. Sequelize’s model definition documentation shows the available data types and options.

Finally, export the sequelize object and the Person model so other modules can use them:

database.js
. . .

module.exports = {
    sequelize: sequelize,
    Person: Person
};

It’s handy to have a table-creation script in a separate file that you can call at any time during development. These types of files are called migrations. Create a new file to hold this code:

  • nano migrate.js

Add these lines to the file to import the database model you defined, and call the sync() function to initialize the database, which creates the table for your model:

migrate.js
var db = require('./database.js');
db.sequelize.sync();

The application is looking for database connection information in system environment variables. Create a file called .env to hold those values, which you will load into the environment during development:

  • nano .env

Add the following variable declarations to the file. Ensure that you set DB_HOST, DB_PORT, and DB_PASSWORD to those associated with your DigitalOcean PostgreSQL cluster:

.env
export DB_SCHEMA=addressbook_db
export DB_USER=addressbook_user
export DB_PASSWORD=your_db_user_password
export DB_HOST=your_db_cluster_host
export DB_PORT=your_db_cluster_port
export DB_SSL=true
export PORT=3000

Save the file.

Warning: never check environment files into source control. They usually have sensitive information.

Since you defined a default .gitignore file when you created the repository, this file is already ignored.

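If you want to double-check, git check-ignore prints the matching rule and path:

  • git check-ignore -v .env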

You are ready to initialize the database. Import the environment file and run migrate.js:

  • source ./.env

  • node migrate.js

This creates the database table:

Output
Executing (default): CREATE TABLE IF NOT EXISTS "People" ("id" SERIAL , "firstName" VARCHAR(255) NOT NULL, "lastName" VARCHAR(255), "createdAt" TIMESTAMP WITH TIME ZONE NOT NULL, "updatedAt" TIMESTAMP WITH TIME ZONE NOT NULL, PRIMARY KEY ("id"));
Executing (default): SELECT i.relname AS name, ix.indisprimary AS primary, ix.indisunique AS unique, ix.indkey AS indkey, array_agg(a.attnum) as column_indexes, array_agg(a.attname) AS column_names, pg_get_indexdef(ix.indexrelid) AS definition FROM pg_class t, pg_class i, pg_index ix, pg_attribute a WHERE t.oid = ix.indrelid AND i.oid = ix.indexrelid AND a.attrelid = t.oid AND t.relkind = 'r' and t.relname = 'People' GROUP BY i.relname, ix.indexrelid, ix.indisprimary, ix.indisunique, ix.indkey ORDER BY i.relname;

The output shows two commands. The first one creates the People table as per your definition. The second command checks that the table was indeed created by looking it up in the PostgreSQL catalog.

It’s good practice to create tests for your code. With tests, you can validate the code’s behavior. You can write a check for each function, method, or any other part of your system and verify that it works the way you’d expect, without having to test things manually.

The jest testing framework is a great fit for writing tests against Node.js applications. Jest scans the files in the project for test files and executes them one at a time. Install Jest with the --save-dev option, which tells npm that the module is not required to run the program, but is a dependency for developing the application:

  • npm install --save-dev jest

You’ll write tests to verify that you can insert, read, and delete records from your database. These tests will verify that your database connection and permissions are configured properly, and will also provide some tests you can use in your CI/CD pipeline later.

Create the database.test.js file:

  • nano database.test.js

Add the following content. Start by importing the database code:

database.test.js
const db = require('./database');

. . .

To ensure the database is ready to use, call sync() inside the beforeAll function:

database.test.js
. . .

beforeAll(async () => {
    await db.sequelize.sync();
});

. . .

The first test creates a person record in the database. The sequelize library executes all queries asynchronously, which means it doesn’t wait for the results of the query. To make the test wait for results so you can verify them, you must use the async and await keywords. This test calls the create() method to insert a new row in the database. Use expect to compare the person.id column with 1. The test will fail if you get a different value:

database.test.js
. . .

test('create person', async () => {
    expect.assertions(1);
    const person = await db.Person.create({
        id: 1,
        firstName: 'Sammy',
        lastName: 'Davis Jr.'
    });
    expect(person.id).toEqual(1);
});

. . .

In the next test, use the findByPk() method to retrieve the row with id=1. Then, validate the firstName and lastName values. Once again, use async and await:

database.test.js
. . .

test('get person', async () => {
    expect.assertions(2);
    const person = await db.Person.findByPk(1);
    expect(person.firstName).toEqual('Sammy');
    expect(person.lastName).toEqual('Davis Jr.');
});

. . .

Finally, test removing a person from the database. The destroy() method deletes the person with id=1. To ensure that it worked, try retrieving the person a second time and checking that the returned value is null:

database.test.js
. . .

test('delete person', async () => {
    expect.assertions(1);
    await db.Person.destroy({
        where: {
            id: 1
        }
    });
    const person = await db.Person.findByPk(1);
    expect(person).toBeNull();
});

. . .

Finally, add this code to close the connection to the database with close() once all tests have finished:

database.test.js
. . .

afterAll(async () => {
    await db.sequelize.close();
});

Save the file.

The jest command runs the test suite for your program, but you can also store commands in package.json. Open this file in your editor:

  • nano package.json

Locate the scripts keyword and replace the existing test line (which was just a placeholder). The test command is jest:

package.json
. . .

  "scripts": {
    "test": "jest"
  },

. . .

Now you can call npm run test to invoke the test suite. This may be a longer command, but if you need to modify the jest command later, external services won’t have to change; they can continue calling npm run test.

Run the tests:

  • npm run test

Then, check the results:

Output
console.log node_modules/sequelize/lib/sequelize.js:1176
  Executing (default): CREATE TABLE IF NOT EXISTS "People" ("id" SERIAL , "firstName" VARCHAR(255) NOT NULL, "lastName" VARCHAR(255), "createdAt" TIMESTAMP WITH TIME ZONE NOT NULL, "updatedAt" TIMESTAMP WITH TIME ZONE NOT NULL, PRIMARY KEY ("id"));
console.log node_modules/sequelize/lib/sequelize.js:1176
  Executing (default): SELECT i.relname AS name, ix.indisprimary AS primary, ix.indisunique AS unique, ix.indkey AS indkey, array_agg(a.attnum) as column_indexes, array_agg(a.attname) AS column_names, pg_get_indexdef(ix.indexrelid) AS definition FROM pg_class t, pg_class i, pg_index ix, pg_attribute a WHERE t.oid = ix.indrelid AND i.oid = ix.indexrelid AND a.attrelid = t.oid AND t.relkind = 'r' and t.relname = 'People' GROUP BY i.relname, ix.indexrelid, ix.indisprimary, ix.indisunique, ix.indkey ORDER BY i.relname;
console.log node_modules/sequelize/lib/sequelize.js:1176
  Executing (default): INSERT INTO "People" ("id","firstName","lastName","createdAt","updatedAt") VALUES ($1,$2,$3,$4,$5) RETURNING *;
console.log node_modules/sequelize/lib/sequelize.js:1176
  Executing (default): SELECT "id", "firstName", "lastName", "createdAt", "updatedAt" FROM "People" AS "Person" WHERE "Person"."id" = 1;
console.log node_modules/sequelize/lib/sequelize.js:1176
  Executing (default): DELETE FROM "People" WHERE "id" = 1
console.log node_modules/sequelize/lib/sequelize.js:1176
  Executing (default): SELECT "id", "firstName", "lastName", "createdAt", "updatedAt" FROM "People" AS "Person" WHERE "Person"."id" = 1;

PASS ./database.test.js
  ✓ create person (344ms)
  ✓ get person (173ms)
  ✓ delete person (323ms)

Test Suites: 1 passed, 1 total
Tests:       3 passed, 3 total
Snapshots:   0 total
Time:        5.315s
Ran all test suites.

With the database code tested, you can build the API service to manage the people in the address book.

To serve HTTP requests, you’ll use the Express web framework. Install Express and save it as a dependency using npm install:

  • npm install --save express

You’ll also need the body-parser module, which you’ll use to access the HTTP request body. Install this as a dependency as well:

  • npm install --save body-parser

Create the main application file app.js:

  • nano app.js

Import the express, body-parser, and database modules. Then create an instance of the express module called app to control and configure the service. You use app.use() to add features such as middleware. Use this to add the body-parser module so the application can read url-encoded strings:

app.js
var express = require('express');
var bodyParser = require('body-parser');
var db = require('./database');
var app = express();
app.use(bodyParser.urlencoded({ extended: true }));

. . .

Next, add routes to the application. Routes are similar to buttons in an app or website; they trigger some action in your application. Routes link unique URLs to actions in the application. Each route will serve a specific path and support a different operation.

The first route you’ll define handles GET requests for the /person/$ID path, which will display the database record for the person with the specified ID. Express automatically sets the value of the requested $ID in the req.params.id variable.

The application must reply with the person data encoded as a JSON string. As you did in the database tests, use the findByPk() method to retrieve the person by id and reply to the request with HTTP status 200 (OK) and send the person record as JSON. Add the following code:

app.js
. . .

app.get("/person/:id", function(req, res) {
    db.Person.findByPk(req.params.id)
        .then( person => {
            res.status(200).send(JSON.stringify(person));
        })
        .catch( err => {
            res.status(500).send(JSON.stringify(err));
        });
});

. . .

Errors cause the code in catch() to be executed. For instance, if the database is down, the connection will fail, and this handler will execute instead. In case of trouble, the route sets the HTTP status to 500 (Internal Server Error) and sends the error message back to the user.

Add another route to create a person in the database. This route will handle PUT requests and access the person’s data from the req.body. Use the create() method to insert a row in the database:

app.js
. . .

app.put("/person", function(req, res) {
    db.Person.create({
        firstName: req.body.firstName,
        lastName: req.body.lastName,
        id: req.body.id
    })
        .then( person => {
            res.status(200).send(JSON.stringify(person));
        })
        .catch( err => {
            res.status(500).send(JSON.stringify(err));
        });
});

. . .

Add another route to handle DELETE requests, which will remove records from the address book. First, use the ID to locate the record and then use the destroy method to remove it:

app.js
. . .

app.delete("/person/:id", function(req, res) {
    db.Person.destroy({
        where: {
            id: req.params.id
        }
    })
        .then( () => {
            res.status(200).send();
        })
        .catch( err => {
            res.status(500).send(JSON.stringify(err));
        });
});

. . .

And for convenience, add a route that retrieves all people in the database using the /all path:

app.js
. . .

app.get("/all", function(req, res) {
    db.Person.findAll()
        .then( persons => {
            res.status(200).send(JSON.stringify(persons));
        })
        .catch( err => {
            res.status(500).send(JSON.stringify(err));
        });
});

. . .

One last route left. If the request did not match any of the previous routes, send status code 404 (Not Found):

app.js
. . .

app.use(function(req, res) {
    res.status(404).send("404 - Not Found");
});

. . .

Finally, add the listen() method, which starts up the service. If the environment variable PORT is defined, then the service listens in that port; otherwise, it defaults to port 3000:

app.js
. . .

var server = app.listen(process.env.PORT || 3000, function() {
    console.log("app is running on port", server.address().port);
});

As you’ve learned, the package.json file lets you define various commands to run tests, start your apps, and other tasks, which often lets you run common commands with much less typing. Add a new command on package.json to start the application. Edit the file:

  • nano package.json

Add the start command, so it looks like this:

package.json
. . .

  "scripts": {
    "test": "jest",
    "start": "node app.js"
  },

. . .

Don’t forget to add a comma to the previous line, as the scripts section needs its entries separated by commas.

Save the file and start the application for the first time. First, load the environment file with source; this imports the variables into the session and makes them available to the application. Then, start the application with npm run start:

  • source ./.env

  • npm run start

The app starts on port 3000:

Output
app is running on port 3000

Open a browser and navigate to http://localhost:3000/all. You’ll see a page showing [].

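You can also exercise the API with curl, which you installed in the prerequisites. For example, while the application is running, the following commands (the values are only sample data) create a person with a PUT request and then list all records:

  • curl -X PUT -d "firstName=Sammy" -d "lastName=the Shark" -d "id=1" http://localhost:3000/person
  • curl http://localhost:3000/all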

Switch back to your terminal and press CTRL-C to stop the application.

Now is an excellent time to add code quality tests. Code quality tools, also known as linters, scan the project for issues in the code. Bad coding practices like leaving unused variables, not ending statements with a semicolon, or missing curly braces can cause bugs that are difficult to find.

Install the jshint tool, a JavaScript linter, as a development dependency:

  • npm install --save-dev jshint

Over the years, JavaScript has received numerous updates, features, and syntax changes. The language has been standardized by ECMA International under the name of “ECMAScript”. About once a year, ECMA releases a new version of ECMAScript with new features.

By default, jshint assumes that your code is compatible with ES6 (ECMAScript Version 6), and will throw an error if it finds any keywords not supported in that version. You’ll want to find the version that is compatible with your code. If you look at the feature table for all the recent versions, you’ll find that the async/await keywords were not introduced until ES8. You used both keywords in the database test code, so that sets the minimum compatible version to ES8.

To tell jshint the version you’re using, create a file called .jshintrc:

  • nano .jshintrc

In the file, specify esversion. The jshintrc file uses JSON, so create a new JSON object in the file:

.jshintrc
{ "esversion": 8 }

Save the file and exit the editor.

Add a command to run jshint. Edit package.json:

  • nano package.json

Add a lint command to your project in the scripts section of package.json. The command calls the lint tool against all the JavaScript files you created so far:

package.jsonscripts部分将lint命令添加到您的项目中。 该命令针对到目前为止创建的所有JavaScript文件调用lint工具:

package.json
. . .

  "scripts": {
    "test": "jest",
    "start": "node app.js",
    "lint": "jshint app.js database*.js migrate.js"
  },

. . .

Now you can run the linter to find any issues:

  • npm run lint

There should not be any error messages:

Output
> jshint app.js database*.js migrate.js

If there are any errors, jshint will show the line that has the problem.

You’ve completed the project and ensured it works. Add the files to the repository, commit, and push the changes:

  • git add *.js

  • git add package*.json
  • git add .jshintrc
  • git commit -m 'initial commit'
  • git push origin master

Now you can configure Semaphore to test, build, and deploy the application, starting by configuring Semaphore with your DigitalOcean Personal Access Token and database credentials.

Step 3 — Creating Secrets in Semaphore

There is some information that doesn’t belong in a GitHub repository. Passwords and API tokens are good examples of this. You’ve stored this sensitive data in a separate file and loaded it into your environment. When using Semaphore, you can use Secrets to store sensitive data.

There are three kinds of secrets in the project:

  • Docker Hub: the username and password of your Docker Hub account.

  • DigitalOcean Personal Access Token: to deploy the application to your Kubernetes cluster.

  • Environment Variables: for database username and password connection parameters.

To create the first secret, open your browser and log in to the Semaphore website. On the left navigation menu, click Secrets under the CONFIGURATION heading. Click the Create New Secret button.

For Name of the Secret, enter dockerhub. Then under Environment Variables, create two environment variables:

  • DOCKER_USERNAME: your DockerHub username.

  • DOCKER_PASSWORD: your DockerHub password.

Click Save Changes.

Create a second secret for your DigitalOcean Personal Access Token. Once again, click on Secrets on the left navigation menu, then on Create New Secret. Call this secret do-access-token and create an environment variable called DO_ACCESS_TOKEN with the value set to your Personal Access Token.

Save the secret.

For the next secret, instead of setting environment variables directly, you’ll upload the .env file from the project’s root.

Create a new secret called env-production. Under the Files section, press the Upload file link to locate and upload your .env file, and tell Semaphore to place it at /home/semaphore/env-production.

Note: Because the file is hidden, you may have trouble finding it on your computer. There is usually a menu item or a key combination to view hidden files, such as CTRL+H. If all else fails, you can try copying the file with a non-hidden name:

  • cp .env env

Then upload the file and rename it back:

  • cp env .env

The environment variables are all configured. Now you can begin the Continuous Integration setup.

Step 4 — Adding your Project to Semaphore

In this step you will add your project to Semaphore and start the Continuous Integration (CI) pipeline.

First, link your GitHub repository with Semaphore:

  1. Log in to your Semaphore account.

  2. Click the + icon next to PROJECTS.

  3. Click the Add Repository button next to your repository.

Now that Semaphore is connected, it will pick up any changes in the repository automatically.

You are now ready to create the Continuous Integration pipeline for the application. A pipeline defines the path your code must travel to get built, tested, and deployed. The pipeline is automatically run each time there is a change in the GitHub repository.

First, you should ensure that Semaphore uses the same version of Node you’ve been using during development. You can check which version is running on your machine:

  • node -v

Output
v10.16.0

You can tell Semaphore which version of Node.js to use by creating a file called .nvmrc in your repository. Internally, Semaphore uses node version manager to switch between Node.js versions. Create the .nvmrc file and set the version to 10.16.0:

  • echo '10.16.0' > .nvmrc

Semaphore pipelines go in the .semaphore directory. Create the directory:

  • mkdir .semaphore

Create a new pipeline file. The initial pipeline is always called semaphore.yml. In this file, you’ll define all the steps required to build and test the application.

  • nano .semaphore/semaphore.yml

Note: You are creating a file in the YAML format. You must preserve the leading spaces as shown in the tutorial.

The first line must set the Semaphore file version; the current stable is v1.0. Also, the pipeline needs a name. Add these lines to your file:

.semaphore/semaphore.yml
version: v1.0
name: Addressbook

. . .

Semaphore automatically provisions virtual machines to run the tasks. There are various machines to choose from. For the integration jobs, use the e1-standard-2 (2 CPUs 4 GB RAM) along with an Ubuntu 18.04 OS. Add these lines to the file:

.semaphore/semaphore.yml
. . .

agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804

. . .

Semaphore uses blocks to organize the tasks. Each block can have one or more jobs. All jobs in a block run in parallel, each one in an isolated machine. Semaphore waits for all jobs in a block to pass before starting the next one.

Start by defining the first block, which installs all the JavaScript dependencies to test and run the application:

.semaphore/semaphore.yml
. . .

blocks:
  - name: Install dependencies
    task:

. . .

You can define environment variables that are common for all jobs, like setting NODE_ENV to test, so Node.js knows this is a test environment. Add this code after task:

.semaphore/semaphore.yml
. . .
    task:
      env_vars:
        - name: NODE_ENV
          value: test

. . .

Commands in the prologue section are executed before each job in the block. It’s a convenient place to define setup tasks. You can use checkout to clone the GitHub repository. Then, nvm use activates the appropriate Node.js version you specified in .nvmrc. Add the prologue section:

.semaphore/semaphore.yml
task:
. . .

      prologue:
        commands:
          - checkout
          - nvm use

. . .

Next add this code to install the project’s dependencies. To speed up jobs, Semaphore provides the cache tool. You can run cache store to save node_modules directory in Semaphore’s cache. cache automatically figures out which files and directories should be stored. The second time the job is executed, cache restore restores the directory.

.semaphore/semaphore.yml
. . .

      jobs:
        - name: npm install and cache
          commands:
            - cache restore
            - npm install
            - cache store 

. . .

Add another block which will run two jobs: one to run the lint test, and another to run the application’s test suite.

.semaphore/semaphore.yml
. . .

  - name: Tests
    task:
      env_vars:
        - name: NODE_ENV
          value: test
      prologue:
        commands:
          - checkout
          - nvm use
          - cache restore 

. . .

The prologue repeats the same commands as in the previous block and restores node_module from the cache. Since this block will run tests, you set the NODE_ENV environment variable to test.

Now add the jobs. The first job performs the code quality check with jshint:

.semaphore/semaphore.yml
. . .

      jobs:
        - name: Static test
          commands:
            - npm run lint

. . .

The next job executes the unit tests. You’ll need a database to run them, as you don’t want to use your production database. Semaphore’s sem-service can start a local PostgreSQL database in the test environment that is completely isolated. The database is destroyed when the job ends. Start this service and run the tests:

.semaphore/semaphore.yml
. . .

        - name: Unit test
          commands:
            - sem-service start postgres
            - npm run test

Save the .semaphore/semaphore.yml file.

Now add and commit the changes to the GitHub repository:

  • git add .nvmrc

  • git add .semaphore/semaphore.yml
  • git commit -m "continuous integration pipeline"
  • git push origin master

As soon as the code is pushed to GitHub, Semaphore starts the CI pipeline.

You can click on the pipeline to show the blocks and jobs, and their output.

Next you will create a new pipeline that builds a Docker image for the application.

Step 5 — Building Docker Images for the Application

A Docker image is the basic unit of a Kubernetes deployment. The image should have all the binaries, libraries, and code required to run the application. A Docker container is not a lightweight virtual machine, but it behaves like one. The Docker Hub registry contains hundreds of ready-to-use images, but we’re going to build our own.

In this step, you’ll add a new pipeline to build a custom Docker image for your app and push it to Docker Hub.

To build a custom image, create a Dockerfile:

  • nano Dockerfile

The Dockerfile is a recipe to create the image. You can use the official Node.js distribution as a starting point instead of starting from scratch. Add this to your Dockerfile:

Dockerfile
FROM node:10.16.0-alpine

. . .

Then add commands that copy package.json and package-lock.json and install the node modules inside the image:

Dockerfile
. . .

COPY package*.json ./
RUN npm install

. . .

Installing the dependencies first will speed up subsequent builds, as Docker will cache this step.

Now add this command which copies all the application files in the project root into the image:

Dockerfile
. . .

COPY *.js ./

. . .

Finally, EXPOSE specifies that the container listens for connections on port 3000, where the application is listening, and CMD sets the command that should run when the container starts. Add these lines to your file:

Dockerfile
. . .

EXPOSE 3000
CMD [ "npm", "run", "start" ]

Save the file.

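If you have Docker installed locally, you can optionally try the image before wiring it into a pipeline. This is only a sanity check: without the database environment variables the API routes will return errors, but the server should start and listen on port 3000:

  • docker build -t addressbook .
  • docker run -p 3000:3000 addressbook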

With the Dockerfile complete, you can create a new pipeline so Semaphore can build the image for you when you push your code to GitHub. Create a new file called docker-build.yml:

  • nano .semaphore/docker-build.yml

Start the pipeline with the same boilerplate as the CI pipeline, but with the name Docker build:

.semaphore/docker-build.yml
version: v1.0
name: Docker build
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804

. . .

This pipeline will have only one block and one job. In Step 3, you created a secret named dockerhub with your Docker Hub username and password. Here, you’ll import these values using the secrets keyword. Add this code:

.semaphore/docker-build.yml
. . .

blocks:
  - name: Build
    task:
      secrets:
        - name: dockerhub

. . .

Docker images are stored in repositories. We’ll use the official Docker Hub which allows for an unlimited number of public images. Add these lines to check out the code from GitHub and use the docker login command to authenticate with Docker Hub.

.semaphore/docker-build.yml
task:
. . .

      prologue:
        commands:
          - checkout
          - echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin

. . .

Each Docker image is fully identified by the combination of name and tag. The name usually corresponds to the product or software, and the tag corresponds to the particular version of the software. For example, node:10.16.0. When no tag is supplied, Docker defaults to the special latest tag. Hence, it’s considered good practice to use the latest tag to refer to the most current image.

Add the following code to build the image and push it to Docker Hub:

.semaphore/docker-build.yml
. . .

      jobs:
      - name: Docker build
        commands:
          - docker pull "${DOCKER_USERNAME}/addressbook:latest" || true
          - docker build --cache-from "${DOCKER_USERNAME}/addressbook:latest" -t "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID" .
          - docker push "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID"

When Docker builds the image, it reuses parts of existing images to speed up the process. The first command tries to pull the latest image from Docker Hub so it may be reused. Semaphore stops the pipeline if any of the commands return a status code different than zero. For example, if the repository doesn’t have any latest image, as it won’t on the first try, the pipeline will stop. You can force Semaphore to ignore failed commands by appending || true to the command.

The second command builds the image. To reference this particular image later, you can tag it with a unique string. Semaphore provides several environment variables for jobs. One of them, $SEMAPHORE_WORKFLOW_ID is unique and shared among all the pipelines in the workflow. It’s handy for referencing this image later in the deployment.

The third command pushes the image to Docker Hub.

The build pipeline is ready, but Semaphore will not start it unless you connect it to the main CI pipeline. You can chain multiple pipelines to create complex, multi-branch workflows using promotions.

Edit the main pipeline file .semaphore/semaphore.yml:

  • nano .semaphore/semaphore.yml

Add the following lines at the end of the file:

.semaphore/semaphore.yml
. . .

promotions:
  - name: Dockerize
    pipeline_file: docker-build.yml
    auto_promote_on:
      - result: passed

auto_promote_on defines the condition to start the docker build pipeline. In this case, it runs when all jobs defined in the semaphore.yml file have passed.

To test the new pipeline, you need to add, commit, and push all the modified files to GitHub:

  • git add Dockerfile

  • git add .semaphore/docker-build.yml
  • git add .semaphore/semaphore.yml
  • git commit -m "docker build pipeline"
  • git push origin master

After the CI pipeline is complete, the Docker build pipeline starts.

When it finishes, you’ll see your new image in your Docker Hub repository.

Your build process now tests the code and creates the image. Next, you’ll create the final pipeline to deploy the application to your Kubernetes cluster.

Step 6 — Setting up Continuous Deployment to Kubernetes

The building block of a Kubernetes deployment is the pod. A pod is a group of containers that are managed as a single unit. The containers inside a pod start and stop in unison and always run on the same machine, sharing its resources. Each pod has an IP address. In this case, the pods will only have one container.

Pods are ephemeral; they are created and destroyed frequently. You can’t tell which IP address is going to be assigned to each pod until it’s started. To solve this, you’ll use services, which have fixed public IP addresses so incoming connections can be load-balanced and forwarded to the pods.

You could manage pods directly, but it’s better to let Kubernetes handle that by using a deployment. In this section, you will create a declarative manifest that describes the final desired state for your cluster. The manifest has two resources:

  • Deployment: starts the pods in the cluster nodes as required and keeps track of their status. Since in this tutorial we’re using a 3-node cluster, we’ll deploy 3 pods.

  • Service: acts as an entry point for our users. Listens to traffic on port 80 (HTTP) and forwards the connection to the pods.

Create a manifest file called deployment.yml:

  • nano deployment.yml

Start the manifest with the Deployment resource. Add the following contents to the new file to define the deployment:

Deployment资源启动清单。 将以下内容添加到新文件中以定义部署:

deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: addressbook
spec:
  replicas: 3
  selector:
    matchLabels:
      app: addressbook
  template:
    metadata:
      labels:
        app: addressbook
    spec:
      containers:
        - name: addressbook
          image: ${DOCKER_USERNAME}/addressbook:${SEMAPHORE_WORKFLOW_ID}
          env:
            - name: NODE_ENV
              value: "production"
            - name: PORT
              value: "$PORT"
            - name: DB_SCHEMA
              value: "$DB_SCHEMA"
            - name: DB_USER
              value: "$DB_USER"
            - name: DB_PASSWORD
              value: "$DB_PASSWORD"
            - name: DB_HOST
              value: "$DB_HOST"
            - name: DB_PORT
              value: "$DB_PORT"
            - name: DB_SSL
              value: "$DB_SSL"


. . .

For each resource in the manifest, you need to set an apiVersion. For deployments, use apiVersion: apps/v1, a stable version. Then, tell Kubernetes that this resource is a Deployment with kind: Deployment. Each definition should have a name defined in metadata.name.

In the spec section you tell Kubernetes what the desired final state is. This definition requests that Kubernetes should create 3 pods with replicas: 3.

spec部分中,您告诉Kubernetes所需的最终状态是什么。 该定义要求Kubernetes应该创建3个带有replicas: 3 Pod replicas: 3

Labels are key-value pairs used to organize and cross-reference Kubernetes resources. You define labels with metadata.labels, and you can look for matching labels with selector.matchLabels. This is how you connect elements together.

The key spec.template defines a model that Kubernetes will use to create each pod. Inside spec.template.metadata.labels you set one label for the pods: app: addressbook.

With spec.selector.matchLabels you make the deployment manage any pods with the label app: addressbook. In this case you are making this deployment responsible for all the pods.

Finally, you define the image that runs in the pods. In spec.template.spec.containers you set the image name. Kubernetes will pull the image from the registry as needed; in this case, it will pull from Docker Hub. You can also set environment variables for the containers, which is fortunate because you need to supply several values for the database connection.

To keep the deployment manifest flexible, you’ll be relying on variables. The YAML format, however, doesn’t allow variables, so the file isn’t valid yet. You’ll solve that problem when you define the deployment pipeline for Semaphore.

That’s it for the deployment. But this only defines the pods. You still need a service that will allow traffic to flow to your pods. You can add another Kubernetes resource in the same file as long as you use three hyphens (---) as a separator.

Add the following code to define a load balancer service that connects to pods with the addressbook label:

deployment.yml
. . .

---

apiVersion: v1
kind: Service
metadata:
  name: addressbook-lb
spec:
  selector:
    app: addressbook
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 3000

The load balancer will receive connections on port 80 and forward them to the pods’ port 3000 where the application is listening.

Save the file.

Now, create a deployment pipeline for Semaphore that will deploy the app using the manifest. Create a new file in the .semaphore directory:

  • nano .semaphore/deploy-k8s.yml

Begin the pipeline as usual, specifying the version, name, and image:


.semaphore/deploy-k8s.yml
version: v1.0
name: Deploy to Kubernetes
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804

. . .

This pipeline will have two blocks. The first block deploys the application to the Kubernetes cluster.


Define the block and import all the secrets:


.semaphore/deploy-k8s.yml
. . .

blocks:
  - name: Deploy to Kubernetes
    task:
      secrets:
        - name: dockerhub
        - name: do-access-token
        - name: env-production

. . .

Store your DigitalOcean Kubernetes cluster name in an environment variable so you can reference it later:


.semaphore/deploy-k8s.yml
. . .

      env_vars:
        - name: CLUSTER_NAME
          value: addressbook-server

. . .
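The value must match the name you gave your cluster in Step 1. If you want to double-check it, you can list your clusters with doctl (illustrative; this assumes doctl is installed and authenticated on your local machine, while the pipeline installs its own copy in the prologue below):

  • doctl kubernetes cluster list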

DigitalOcean Kubernetes clusters are managed with a combination of two programs: kubectl and doctl. The former is already included in Semaphore’s image, but the latter isn’t, so you need to install it. You can use the prologue section to do it.


Add this prologue section:


.semaphore/deploy-k8s.yml
. . .

      prologue:
        commands:
          - wget https://github.com/digitalocean/doctl/releases/download/v1.20.0/doctl-1.20.0-linux-amd64.tar.gz
          - tar xf doctl-1.20.0-linux-amd64.tar.gz 
          - sudo cp doctl /usr/local/bin
          - doctl auth init --access-token $DO_ACCESS_TOKEN
          - doctl kubernetes cluster kubeconfig save "${CLUSTER_NAME}"
          - checkout

. . .

The first command downloads the official doctl release with wget. The next two commands decompress it with tar and copy the binary into the local path. Once doctl is installed, the pipeline uses it to authenticate with the DigitalOcean API and retrieve the Kubernetes config file for your cluster. Checking out the code is the last step of the prologue.


Next comes the final piece of our pipeline: deploying to the cluster.


Remember that there were some environment variables in deployment.yml, which YAML does not allow, so the file in its current state won’t work. To get around that, source the environment file to load the variables, then use the envsubst command to expand the variables in place with the actual values. The result, a file called deploy.yml, is entirely valid YAML with the values inserted. With the file in place, you can start the deployment with kubectl apply:


.semaphore/deploy-k8s.yml
. . .

      jobs:
      - name: Deploy
        commands:
          - source $HOME/env-production
          - envsubst < deployment.yml | tee deploy.yml
          - kubectl apply -f deploy.yml

. . .
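If you’d like to see what envsubst does on its own, here is a minimal illustration (the variable name is just an example):

  • export DB_HOST=db.example.com
  • echo 'host: "${DB_HOST}"' | envsubst

This prints host: "db.example.com", the same kind of substitution the Deploy job performs on deployment.yml.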

The second block adds the latest tag to the image on Docker Hub to denote that this is the most current version deployed. Repeat the Docker login steps, then pull, retag, and push to Docker Hub:


.semaphore/deploy-k8s.yml
. . .

  - name: Tag latest release
    task:
      secrets:
        - name: dockerhub
      prologue:
        commands:
          - checkout
          - echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
      jobs:
      - name: docker tag latest
        commands:
          - docker pull "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID" 
          - docker tag "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID" "${DOCKER_USERNAME}/addressbook:latest"
          - docker push "${DOCKER_USERNAME}/addressbook:latest"

Save the file.


This pipeline performs the deployment, but it can only start if the Docker image was successfully generated and pushed to Docker Hub. As a result, you must connect the build and deployment pipelines with a promotion. Edit the Docker build pipeline to add it:


  • nano .semaphore/docker-build.yml

Add the promotion to the end of the file:


.semaphore/docker-build.yml
. . .

promotions:
  - name: Deploy to Kubernetes
    pipeline_file: deploy-k8s.yml
    auto_promote_on:
      - result: passed

With auto_promote_on set to result: passed, the deployment pipeline will start automatically whenever the Docker build pipeline succeeds. You are done setting up the CI/CD workflow.


All that remains is pushing the modified files and letting Semaphore do the work. Add, commit, and push your repository’s changes:


  • git add .semaphore/deploy-k8s.yml
  • git add .semaphore/docker-build.yml
  • git add deployment.yml
  • git commit -m "kubernetes deploy pipeline"
  • git push origin master

It’ll take a few minutes for the deployment to complete.

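If you have kubectl configured for the cluster on your local machine, you can watch the rollout while you wait. This sketch assumes the deployment is named addressbook, as in the earlier manifest sketch:

  • kubectl rollout status deployment/addressbook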

Let’s test the application next.


Step 7 — Testing the Application

At this point, the application is up and running. In this step, you’ll use curl to test the API endpoint.


You’ll need to know the public IP that DigitalOcean has given to your cluster. Follow these steps to find it, or use the kubectl alternative shown after the list:


  1. Log in to your DigitalOcean account.
  2. Select the addressbook project.
  3. Go to Networking.
  4. Click on Load Balancers.
  5. Copy the IP address shown.
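Alternatively, assuming kubectl is configured for the cluster, the load balancer’s address appears in the EXTERNAL-IP column of the output of this illustrative command:

  • kubectl get service addressbook-lb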

Let’s check the /all route using curl:


  • curl -w "\n" YOUR_CLUSTER_IP/all

You can use the -w "\n" option to make curl append a newline after the response, which keeps the output readable.


Since there are no records in the database yet, you get an empty JSON array as the result:

Output
[]

Create a new person record by making a PUT request to the /person endpoint:


  • curl -w "\n" -X PUT \
    -d "firstName=Sammy&lastName=the Shark" YOUR_CLUSTER_IP/person

The API returns the JSON object for the person:

Output
{ "id": 1, "firstName": "Sammy", "lastName": "the Shark", "updatedAt": "2019-07-04T23:51:00.548Z", "createdAt": "2019-07-04T23:51:00.548Z" }

Create a second person:


  • curl -w "\n" -X PUT \
    -d "firstName=Tommy&lastName=the Octopus" YOUR_CLUSTER_IP/person

The output indicates that a second person was created:

Output
{ "id": 2, "firstName": "Tommy", "lastName": "the Octopus", "updatedAt": "2019-07-04T23:52:08.724Z", "createdAt": "2019-07-04T23:52:08.724Z" }

Now make a GET request to get the person with the id of 2:


  • curl -w "\n" YOUR_CLUSTER_IP/person/2

The server replies with the data you requested:

Output
{ "id": 2, "firstName": "Tommy", "lastName": "the Octopus", "createdAt": "2019-07-04T23:52:08.724Z", "updatedAt": "2019-07-04T23:52:08.724Z" }

To delete the person, send a DELETE request:


  • curl -w "\n" -X DELETE YOUR_CLUSTER_IP/person/2

No output is returned by this command.

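If you’d like visible confirmation for requests like this one, you can add curl’s -i flag, which prints the HTTP status line and response headers along with the body (the exact status code depends on how the endpoint responds). For example:

  • curl -w "\n" -i YOUR_CLUSTER_IP/all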

You should only have one person in your database, the one with the id of 1. Try getting /all again:


  • curl -w "\n" YOUR_CLUSTER_IP/all

The server replies with an array of persons containing only one record:

Output
[ { "id": 1, "firstName": "Sammy", "lastName": "the Shark", "createdAt": "2019-07-04T23:51:00.548Z", "updatedAt": "2019-07-04T23:51:00.548Z" } ]

At this point, there’s only one person left in the database.


This completes the tests for all the endpoints in our application and marks the end of the tutorial.


Conclusion

In this tutorial, you wrote a complete Node.js application from scratch which used DigitalOcean’s managed PostgreSQL database service. You then used Semaphore’s CI/CD pipelines to fully automate a workflow that tested and built a container image, uploaded it to Docker Hub, and deployed it to DigitalOcean Kubernetes.


To learn more about Kubernetes, you can read An Introduction to Kubernetes and the rest of DigitalOcean’s Kubernetes tutorials.


Now that your application is deployed, you may consider adding a domain name, securing your database cluster, or setting up alerts for your database.


Translated from: https://www.digitalocean.com/community/tutorials/how-to-build-and-deploy-a-node-js-application-to-digitalocean-kubernetes-using-semaphore-continuous-integration-and-delivery
