How To Install Elasticsearch, Logstash, and Kibana (Elastic Stack) on Ubuntu 20.04

Introduction

The Elastic Stack — formerly known as the ELK Stack — is a collection of open-source software produced by Elastic which allows you to search, analyze, and visualize logs generated from any source in any format, a practice known as centralized logging. Centralized logging can be useful when attempting to identify problems with your servers or applications as it allows you to search through all of your logs in a single place. It’s also useful because it allows you to identify issues that span multiple servers by correlating their logs during a specific time frame.

The Elastic Stack has four main components:

  • Elasticsearch: a distributed RESTful search engine which stores all of the collected data.

  • Logstash: the data processing component of the Elastic Stack which sends incoming data to Elasticsearch.

  • Kibana: a web interface for searching and visualizing logs.

  • Beats: lightweight, single-purpose data shippers that can send data from hundreds or thousands of machines to either Logstash or Elasticsearch.

In this tutorial, you will install the Elastic Stack on an Ubuntu 20.04 server. You will learn how to install all of the components of the Elastic Stack — including Filebeat, a Beat used for forwarding and centralizing logs and files — and configure them to gather and visualize system logs. Additionally, because Kibana is normally only available on the localhost, we will use Nginx to proxy it so it will be accessible over a web browser. We will install all of these components on a single server, which we will refer to as our Elastic Stack server.

Note: When installing the Elastic Stack, you must use the same version across the entire stack. In this tutorial we will install the latest versions of the entire stack which are, at the time of this writing, Elasticsearch 7.7.1, Kibana 7.7.1, Logstash 7.7.1, and Filebeat 7.7.1.

Prerequisites

To complete this tutorial, you will need the following:

  • An Ubuntu 20.04 server with 4GB RAM and 2 CPUs set up with a non-root sudo user. You can achieve this by following the Initial Server Setup with Ubuntu 20.04. For this tutorial, we will work with the minimum amount of CPU and RAM required to run Elasticsearch. Note that the amount of CPU, RAM, and storage that your Elasticsearch server will require depends on the volume of logs that you expect.

  • OpenJDK 11 installed. See the section Installing the Default JRE/JDK of How To Install Java with Apt on Ubuntu 20.04 to set this up.

  • Nginx installed on your server, which we will configure later in this guide as a reverse proxy for Kibana. Follow our guide on How to Install Nginx on Ubuntu 20.04 to set this up.

Additionally, because the Elastic Stack is used to access valuable information about your server that you would not want unauthorized users to access, it’s important that you keep your server secure by installing a TLS/SSL certificate. This is optional but strongly encouraged.

However, because you will ultimately make changes to your Nginx server block over the course of this guide, it would likely make more sense for you to complete the Let’s Encrypt on Ubuntu 20.04 guide at the end of this tutorial’s second step. With that in mind, if you plan to configure Let’s Encrypt on your server, you will need the following in place before doing so:

  • A fully qualified domain name (FQDN). This tutorial will use your_domain throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.

  • Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them.

    • An A record with your_domain pointing to your server’s public IP address.

    • An A record with www.your_domain pointing to your server’s public IP address.

Step 1 — Installing and Configuring Elasticsearch

The Elasticsearch components are not available in Ubuntu’s default package repositories. They can, however, be installed with APT after adding Elastic’s package source list.

All of the packages are signed with the Elasticsearch signing key in order to protect your system from package spoofing. Packages which have been authenticated using the key will be considered trusted by your package manager. In this step, you will import the Elasticsearch public GPG key and add the Elastic package source list in order to install Elasticsearch.

To begin, use cURL, the command line tool for transferring data with URLs, to import the Elasticsearch public GPG key into APT. Note that we are using the arguments -fsSL to silence all progress and possible errors (except for a server failure) and to allow cURL to make a request on a new location if redirected. Pipe the output of the cURL command into the apt-key program, which adds the public GPG key to APT.

  • curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Next, add the Elastic source list to the sources.list.d directory, where APT will search for new sources:

  • echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

Next, update your package lists so APT will read the new Elastic source:

  • sudo apt update

Then install Elasticsearch with this command:

  • sudo apt install elasticsearch

Elasticsearch is now installed and ready to be configured. Use your preferred text editor to edit Elasticsearch’s main configuration file, elasticsearch.yml. Here, we’ll use nano:

  • sudo nano /etc/elasticsearch/elasticsearch.yml

Note: Elasticsearch’s configuration file is in YAML format, which means that we need to maintain the indentation format. Be sure that you do not add any extra spaces as you edit this file.

The elasticsearch.yml file provides configuration options for your cluster, node, paths, memory, network, discovery, and gateway. Most of these options are preconfigured in the file but you can change them according to your needs. For the purposes of our demonstration of a single-server configuration, we will only adjust the settings for the network host.

Elasticsearch listens for traffic from everywhere on port 9200. You will want to restrict outside access to your Elasticsearch instance to prevent outsiders from reading your data or shutting down your Elasticsearch cluster through its REST API. To restrict access and therefore increase security, find the line that specifies network.host, uncomment it, and replace its value with localhost like this:

/etc/elasticsearch/elasticsearch.yml
. . .
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
. . .

We have specified localhost so that Elasticsearch only listens on its loopback interface, preventing outside access. If you want it to listen on a specific interface instead, you can specify its IP in place of localhost. Save and close elasticsearch.yml. If you’re using nano, you can do so by pressing CTRL+X, followed by Y and then ENTER .

These are the minimum settings you can start with in order to use Elasticsearch. Now you can start Elasticsearch for the first time.

Start the Elasticsearch service with systemctl. Give Elasticsearch a few moments to start up. Otherwise, you may get errors about not being able to connect.

  • sudo systemctl start elasticsearch

Next, run the following command to enable Elasticsearch to start up every time your server boots:

  • sudo systemctl enable elasticsearch

You can test whether your Elasticsearch service is running by sending an HTTP request:

  • curl -X GET "localhost:9200"

You will see a response showing some basic information about your local node, similar to this:


   
Output
{
  "name" : "Elasticsearch",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "qqhFHPigQ9e2lk-a7AvLNQ",
  "version" : {
    "number" : "7.7.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Now that Elasticsearch is up and running, let’s install Kibana, the next component of the Elastic Stack.

Step 2 — Installing and Configuring the Kibana Dashboard

According to the official documentation, you should install Kibana only after installing Elasticsearch. Installing in this order ensures that the components each product depends on are correctly in place.

Because you’ve already added the Elastic package source in the previous step, you can just install the remaining components of the Elastic Stack using apt:

  • sudo apt install kibana

Then enable and start the Kibana service:

  • sudo systemctl enable kibana

  • sudo systemctl start kibana

Because Kibana is configured to only listen on localhost, we must set up a reverse proxy to allow external access to it. We will use Nginx for this purpose, which should already be installed on your server.

First, use the openssl command to create an administrative Kibana user which you’ll use to access the Kibana web interface. As an example we will name this account kibanaadmin, but to ensure greater security we recommend that you choose a non-standard name for your user that would be difficult to guess.

The following command will create the administrative Kibana user and password, and store them in the htpasswd.users file. You will configure Nginx to require this username and password and read this file momentarily:

  • echo "kibanaadmin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users

Enter and confirm a password at the prompt. Remember or take note of this login, as you will need it to access the Kibana web interface.

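If you prefer to create this entry non-interactively (for example, from a provisioning script), you can generate the hash directly. This is only a sketch: kibanaadmin and examplepass are placeholder values, and passing a password on the command line records it in your shell history, so the interactive prompt above is safer on shared systems:

```shell
# Generate an htpasswd-compatible apr1 hash for a placeholder password
hash=$(openssl passwd -apr1 examplepass)

# Produce the same "user:hash" line that the interactive command
# appends to /etc/nginx/htpasswd.users
echo "kibanaadmin:${hash}"
```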
Next, we will create an Nginx server block file. As an example, we will refer to this file as your_domain, although you may find it helpful to give yours a more descriptive name. For instance, if you have a FQDN and DNS records set up for this server, you could name this file after your FQDN.

Using nano or your preferred text editor, create the Nginx server block file:

  • sudo nano /etc/nginx/sites-available/your_domain

Add the following code block into the file, being sure to update your_domain to match your server’s FQDN or public IP address. This code configures Nginx to direct your server’s HTTP traffic to the Kibana application, which is listening on localhost:5601. Additionally, it configures Nginx to read the htpasswd.users file and require basic authentication.

Note that if you followed the prerequisite Nginx tutorial through to the end, you may have already created this file and populated it with some content. In that case, delete all the existing content in the file before adding the following:

/etc/nginx/sites-available/your_domain
server {
    listen 80;

    server_name your_domain;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

When you’re finished, save and close the file.

Next, enable the new configuration by creating a symbolic link to the sites-enabled directory. If you already created a server block file with the same name in the Nginx prerequisite, you do not need to run this command:

  • sudo ln -s /etc/nginx/sites-available/your_domain /etc/nginx/sites-enabled/your_domain

Then check the configuration for syntax errors:

  • sudo nginx -t

If any errors are reported in your output, go back and double-check that the content you placed in your configuration file was added correctly. Once you see syntax is ok in the output, go ahead and reload the Nginx service:

  • sudo systemctl reload nginx

If you followed the initial server setup guide, you should have a UFW firewall enabled. To allow connections to Nginx, we can adjust the rules by typing:

  • sudo ufw allow 'Nginx Full'

Note: If you followed the prerequisite Nginx tutorial, you may have created a UFW rule allowing the Nginx HTTP profile through the firewall. Because the Nginx Full profile allows both HTTP and HTTPS traffic through the firewall, you can safely delete the rule you created in the prerequisite tutorial. Do so with the following command:

  • sudo ufw delete allow 'Nginx HTTP'

Kibana is now accessible via your FQDN or the public IP address of your Elastic Stack server. You can check the Kibana server’s status page by navigating to the following address and entering your login credentials when prompted:

http://your_domain/status

This status page displays information about the server’s resource usage and lists the installed plugins.

Note: As mentioned in the Prerequisites section, it is recommended that you enable SSL/TLS on your server. You can follow the Let’s Encrypt guide now to obtain a free SSL certificate for Nginx on Ubuntu 20.04. After obtaining your SSL/TLS certificates, you can come back and complete this tutorial.

Now that the Kibana dashboard is configured, let’s install the next component: Logstash.

Step 3 — Installing and Configuring Logstash

Although it’s possible for Beats to send data directly to the Elasticsearch database, it is common to use Logstash to process the data. This will allow you more flexibility to collect data from different sources, transform it into a common format, and export it to another database.

Install Logstash with this command:

  • sudo apt install logstash

After installing Logstash, you can move on to configuring it. Logstash’s configuration files reside in the /etc/logstash/conf.d directory. For more information on the configuration syntax, you can check out the configuration reference that Elastic provides. As you configure the file, it’s helpful to think of Logstash as a pipeline which takes in data at one end, processes it in one way or another, and sends it out to its destination (in this case, the destination being Elasticsearch). A Logstash pipeline has two required elements, input and output, and one optional element, filter. The input plugins consume data from a source, the filter plugins process the data, and the output plugins write the data to a destination.

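For illustration only, an optional filter block could sit between the input and output configurations you are about to create. This tutorial does not require one, since parsing is handled later by Filebeat's ingest pipelines, but a minimal filter using the grok plugin's built-in SYSLOGLINE pattern might look like this:

```
filter {
  grok {
    match => { "message" => "%{SYSLOGLINE}" }
  }
}
```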
Create a configuration file called 02-beats-input.conf where you will set up your Filebeat input:

  • sudo nano /etc/logstash/conf.d/02-beats-input.conf

Insert the following input configuration. This specifies a beats input that will listen on TCP port 5044.

/etc/logstash/conf.d/02-beats-input.conf
input {
  beats {
    port => 5044
  }
}

Save and close the file.

Next, create a configuration file called 30-elasticsearch-output.conf:

  • sudo nano /etc/logstash/conf.d/30-elasticsearch-output.conf

Insert the following output configuration. Essentially, this output configures Logstash to store the Beats data in Elasticsearch, which is running at localhost:9200, in an index named after the Beat used. The Beat used in this tutorial is Filebeat:

/etc/logstash/conf.d/30-elasticsearch-output.conf
output {
  if [@metadata][pipeline] {
    elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}

Save and close the file.

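To make the index naming concrete, here is a sketch of how the %{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd} pattern resolves. The values below are examples based on the versions used in this tutorial:

```shell
# Metadata that Filebeat 7.7.1 attaches to each event it ships
beat="filebeat"
version="7.7.1"

# Logstash substitutes the metadata fields plus the event's date,
# yielding one index per Beat, version, and day
event_date="2020.06.04"
echo "${beat}-${version}-${event_date}"   # prints filebeat-7.7.1-2020.06.04
```

Because the date is part of the index name, each day's events land in a fresh index, which keeps individual indices small and makes it easy to delete old log data.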
Test your Logstash configuration with this command:

  • sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t

If there are no syntax errors, your output will display Config Validation Result: OK. Exiting Logstash after a few seconds. If you don’t see this in your output, check for any errors noted in your output and update your configuration to correct them. Note that you will receive warnings from OpenJDK, but they should not cause any problems and can be ignored.

If your configuration test is successful, start and enable Logstash to put the configuration changes into effect:

  • sudo systemctl start logstash

  • sudo systemctl enable logstash

Now that Logstash is running correctly and is fully configured, let’s install Filebeat.

Step 4 — Installing and Configuring Filebeat

The Elastic Stack uses several lightweight data shippers called Beats to collect data from various sources and transport them to Logstash or Elasticsearch. Here are the Beats that are currently available from Elastic:

  • Filebeat: collects and ships log files.

  • Metricbeat: collects metrics from your systems and services.

  • Packetbeat: collects and analyzes network data.

  • Winlogbeat: collects Windows event logs.

  • Auditbeat: collects Linux audit framework data and monitors file integrity.

  • Heartbeat: monitors services for their availability with active probing.

In this tutorial we will use Filebeat to forward local logs to our Elastic Stack.

Install Filebeat using apt:

  • sudo apt install filebeat

Next, configure Filebeat to connect to Logstash. Here, we will modify the example configuration file that comes with Filebeat.

Open the Filebeat configuration file:

  • sudo nano /etc/filebeat/filebeat.yml

Note: As with Elasticsearch, Filebeat’s configuration file is in YAML format. This means that proper indentation is crucial, so be sure to use the same number of spaces that are indicated in these instructions.

Filebeat supports numerous outputs, but you’ll usually only send events directly to Elasticsearch or to Logstash for additional processing. In this tutorial, we’ll use Logstash to perform additional processing on the data collected by Filebeat. Filebeat will not need to send any data directly to Elasticsearch, so let’s disable that output. To do so, find the output.elasticsearch section and comment out the following lines by preceding them with a #:

/etc/filebeat/filebeat.yml
...
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]
...

Then, configure the output.logstash section. Uncomment the lines output.logstash: and hosts: ["localhost:5044"] by removing the #. This will configure Filebeat to connect to Logstash on your Elastic Stack server at port 5044, the port for which we specified a Logstash input earlier:

/etc/filebeat/filebeat.yml
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

Save and close the file.

保存并关闭文件。

The functionality of Filebeat can be extended with Filebeat modules. In this tutorial we will use the system module, which collects and parses logs created by the system logging service of common Linux distributions.

Let’s enable it:

  • sudo filebeat modules enable system

You can see a list of enabled and disabled modules by running:

  • sudo filebeat modules list

You will see a list similar to the following:


   
Output
Enabled:
system

Disabled:
apache2
auditd
elasticsearch
icinga
iis
kafka
kibana
logstash
mongodb
mysql
nginx
osquery
postgresql
redis
traefik
...

By default, Filebeat is configured to use default paths for the syslog and authorization logs. In the case of this tutorial, you do not need to change anything in the configuration. You can see the parameters of the module in the /etc/filebeat/modules.d/system.yml configuration file.

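For reference, the defaults in that file look roughly like this (abridged from a Filebeat 7.x installation; the commented-out var.paths settings are where you would override the log file locations if your distribution stores them somewhere unusual):

```yaml
# /etc/filebeat/modules.d/system.yml (abridged)
- module: system
  # Syslog
  syslog:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

  # Authorization logs
  auth:
    enabled: true
    #var.paths:
```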
Next, we need to set up the Filebeat ingest pipelines, which parse the log data before sending it through Logstash to Elasticsearch. To load the ingest pipeline for the system module, enter the following command:

  • sudo filebeat setup --pipelines --modules system

Next, load the index template into Elasticsearch. An Elasticsearch index is a collection of documents that have similar characteristics. Indexes are identified with a name, which is used to refer to the index when performing various operations within it. The index template will be automatically applied when a new index is created.

To load the template, use the following command:

  • sudo filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'

   
Output
Index setup finished.

Filebeat comes packaged with sample Kibana dashboards that allow you to visualize Filebeat data in Kibana. Before you can use the dashboards, you need to create the index pattern and load the dashboards into Kibana.

As the dashboards load, Filebeat connects to Elasticsearch to check version information. To load dashboards when Logstash is enabled, you need to disable the Logstash output and enable Elasticsearch output:

  • sudo filebeat setup -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200'] -E setup.kibana.host=localhost:5601

You should receive output similar to this:


   
Output
Overwriting ILM policy is disabled. Set `setup.ilm.overwrite:true` for enabling.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead. See more: https://www.elastic.co/guide/en/elastic-stack-overview/current/xpack-ml.html
Loaded machine learning job configurations
Loaded Ingest pipelines

Now you can start and enable Filebeat:

  • sudo systemctl start filebeat

  • sudo systemctl enable filebeat

If you’ve set up your Elastic Stack correctly, Filebeat will begin shipping your syslog and authorization logs to Logstash, which will then load that data into Elasticsearch.

To verify that Elasticsearch is indeed receiving this data, query the Filebeat index with this command:

  • curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'

You should receive output similar to this:


   
Output
...
{
  "took" : 4,
  "timed_out" : false,
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 4040,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "filebeat-7.7.1-2020.06.04",
        "_type" : "_doc",
        "_id" : "FiZLgXIB75I8Lxc9ewIH",
        "_score" : 1.0,
        "_source" : {
          "cloud" : {
            "provider" : "digitalocean",
            "instance" : {
              "id" : "194878454"
            },
            "region" : "nyc1"
          },
          "@timestamp" : "2020-06-04T21:45:03.995Z",
          "agent" : {
            "version" : "7.7.1",
            "type" : "filebeat",
            "ephemeral_id" : "cbcefb9a-8d15-4ce4-bad4-962a80371ec0",
            "hostname" : "june-ubuntu-20-04-elasticstack",
            "id" : "fbd5956f-12ab-4227-9782-f8f1a19b7f32"
          },
...

If your output shows 0 total hits, Elasticsearch is not loading any logs under the index you searched for, and you will need to review your setup for errors. If you received the expected output, continue to the next step, in which we will see how to navigate through some of Kibana’s dashboards.

如果您的输出显示总点击数为0,则Elasticsearch不会在搜索的索引下加载任何日志,因此您需要检查设置是否有错误。 如果您收到了预期的输出,请继续执行下一步,在该步骤中,我们将了解如何浏览Kibana的某些仪表板。
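If you would rather script this zero-hit check than read the JSON by eye, a small Python sketch can extract the hit count. The `count_hits` helper is hypothetical (not part of the tutorial), and the response shape follows the Elasticsearch 7.x `_search` output shown above:

```python
import json

def count_hits(response_body: str) -> int:
    """Return hits.total.value from an Elasticsearch 7.x _search response."""
    body = json.loads(response_body)
    return body["hits"]["total"]["value"]

# Abridged sample response, shaped like the output above
sample = """
{
  "took": 4,
  "timed_out": false,
  "hits": {
    "total": { "value": 4040, "relation": "eq" },
    "hits": []
  }
}
"""

hits = count_hits(sample)
if hits == 0:
    # Mirrors the advice above: zero hits means no logs were indexed
    print("0 hits: review your Filebeat/Logstash setup for errors")
else:
    print(f"Elasticsearch is receiving data ({hits} documents)")
```

In practice you would feed it the body returned by the `curl` query above instead of the canned sample.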

第5步-探索Kibana仪表盘 (Step 5 — Exploring Kibana Dashboards)

Let’s return to the Kibana web interface that we installed earlier.

让我们返回之前安装的Kibana Web界面。

In a web browser, go to the FQDN or public IP address of your Elastic Stack server. If your session has been interrupted, you will need to re-enter the credentials you defined in Step 2. Once you have logged in, you will see the Kibana homepage:

在Web浏览器中,转到Elastic Stack服务器的FQDN或公共IP地址。 如果会话已中断,则需要重新输入在步骤2中定义的凭据。登录后,您将获得Kibana主页:

Click the Discover link in the left-hand navigation bar (you may have to click the Expand icon at the very bottom left to see the navigation menu items). On the Discover page, select the predefined filebeat-* index pattern to see Filebeat data. By default, this will show you all of the log data over the last 15 minutes. You will see a histogram with log events, and some log messages below:

单击左侧导航栏中的“ 发现”链接(您可能必须单击最左下方的“ 展开”图标以查看导航菜单项)。 在“ 发现”页面上,选择预定义的filebeat- *索引模式以查看Filebeat数据。 默认情况下,它将显示过去15分钟内的所有日志数据。 您将在下面看到带有日志事件的直方图,以及一些日志消息:

Here, you can search and browse through your logs and also customize your dashboard. At this point, though, there won’t be much in there because you are only gathering syslogs from your Elastic Stack server.

在这里,您可以搜索和浏览日志,还可以自定义仪表板。 但是,此时并没有太多,因为您只从Elastic Stack服务器收集系统日志。

Use the left-hand panel to navigate to the Dashboard page and search for the Filebeat System dashboards. Once there, you can select the sample dashboards that come with Filebeat’s system module.

使用左侧面板导航到“ 仪表板”页面并搜索Filebeat System仪表板。 到那里后,您可以选择Filebeat的system模块随附的示例仪表板。

For example, you can view detailed stats based on your syslog messages:

例如,您可以根据系统日志消息查看详细的统计信息:

You can also view which users have used the sudo command and when:

您还可以查看哪些用户使用了sudo命令以及何时使用:

Kibana has many other features, such as graphing and filtering, so feel free to explore.

Kibana还具有许多其他功能,例如绘图和过滤,因此请随时进行探索。

结论 (Conclusion)

In this tutorial, you’ve learned how to install and configure the Elastic Stack to collect and analyze system logs. Remember that you can send just about any type of log or indexed data to Logstash using Beats, but the data becomes even more useful if it is parsed and structured with a Logstash filter, as this transforms the data into a consistent format that can be read easily by Elasticsearch.

在本教程中,您学习了如何安装和配置Elastic Stack来收集和分析系统日志。 请记住,您可以使用Beats将几乎任何类型的日志或索引数据发送到Logstash,但是如果使用Logstash过滤器对其进行解析和结构化,数据将变得更加有用,因为这会将数据转换为Elasticsearch可以轻松读取的一致格式。
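As an illustration of the parsing mentioned above, a minimal Logstash filter for traditional syslog lines might look like the following sketch. The `%{SYSLOGTIMESTAMP}`, `%{SYSLOGHOST}`, and related names are grok patterns bundled with Logstash; the field names on the right of each colon are illustrative choices, not required ones:

```
filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:hostname} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:log_message}" }
  }
  date {
    match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }
}
```

The grok stage splits each raw syslog line into named fields, and the date stage sets the event timestamp from the parsed value, so Elasticsearch can index and query the fields individually.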

Translated from: https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elastic-stack-on-ubuntu-20-04

