First steps with an elasticsearch + kibana setup

I've been analyzing some logs recently, so I set up elasticsearch + kibana to play around with.

0x00 Prerequisites

ubuntu 18.04 (if not using docker, CentOS is recommended for the install)

docker --version
Docker version 18.09.7, build 2d0083d

0x01 Setup

1. Docker

apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker

Docker proxy

sudo mkdir -p /etc/systemd/system/docker.service.d

Create a proxy configuration file in the service directory and add the content below.
NO_PROXY lists addresses that should not go through the proxy, e.g. localhost and private local registries.

vi /etc/systemd/system/docker.service.d/http-proxy.conf

[Service]
Environment="HTTP_PROXY=https://ip:port/"
Environment="NO_PROXY=localhost,127.0.0.1"

Then reload systemd and restart Docker so the proxy takes effect:

sudo systemctl daemon-reload
sudo systemctl restart docker

2. portainer

Portainer is a web UI for managing Docker:
https://www.portainer.io/installation/

$ docker volume create portainer_data
$ docker run -d -p 8000:8000 -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer

3. elasticsearch

docker pull elasticsearch:7.2.0

Note: 7.2 drops the concept of mapping types and organizes everything by index; the developers' view is that elasticsearch should not be treated as a plain database.

Open the visualization tool -> Volumes module.
Create local volumes to persist elasticsearch's logs, data, and config directories:

3.1 设置卷映射

/usr/share/elasticsearch/data -> es_data
/usr/share/elasticsearch/config -> es_config

3.2 Set up the port mappings (9200 for HTTP, 9300 for node transport)

3.3 Set single-node mode (see the Docker Hub page)

https://hub.docker.com/_/elasticsearch

discovery.type=single-node

3.4 Edit the elasticsearch configuration file

Start the container after editing:

/var/lib/docker/volumes/es_config/_data#
cat elasticsearch.yml
cluster.name: "docker-cluster"

# allow connections from the LAN
network.host: 0.0.0.0

# cap the fielddata cache and set the circuit-breaker limit
indices.fielddata.cache.size: 75%
indices.breaker.fielddata.limit: 85%

# enable CORS, otherwise es rejects cross-origin requests and some methods
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: X-Requested-With, Content-Type, Content-Length, X-User
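Both fielddata settings above are percentages of the JVM heap. As a quick sanity check of what they allow in absolute terms, assuming a hypothetical 1 GB heap:

```python
heap_bytes = 1 * 1024**3  # assume a 1 GB JVM heap for illustration

fielddata_cache = int(heap_bytes * 0.75)  # indices.fielddata.cache.size: 75%
breaker_limit = int(heap_bytes * 0.85)    # indices.breaker.fielddata.limit: 85%

print(fielddata_cache // 1024**2, "MB fielddata cache cap")  # → 768 MB
print(breaker_limit // 1024**2, "MB circuit-breaker limit")  # → 870 MB
```

The breaker limit must stay above the cache size, otherwise requests trip the breaker before the cache is ever evicted.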

3.5 Test access

curl http://<host>:9200 should return the cluster info JSON.

3.6 Install the ik Chinese analysis plugin

cd /usr/share/elasticsearch/plugins
mkdir ik && cd ik
curl -LO https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.2.0/elasticsearch-analysis-ik-7.2.0.zip
unzip elasticsearch-analysis-ik-7.2.0.zip
docker restart [docker-id]

4. Setting up kibana

docker pull kibana:7.2.0

port map: 5601
/usr/share/kibana/config -> kibana_config (local volume)

4.1 Edit the configuration

/var/lib/docker/volumes/kibana_config/_data# cat kibana.yml
#
# ** THIS IS AN AUTO-GENERATED FILE **
#

# Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
# point kibana at the elasticsearch host
elasticsearch.hosts: [ "https://192.168.123.135:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true

# write the kibana log to a file
logging.dest: /usr/share/kibana/config/kibana-log.txt

4.2 Test the startup

Browse to http://<host>:5601 to confirm kibana comes up.

0x04 Common es statements

All statements below were run against a 7.2 environment.

1.1 Create an index

PUT test1
{
  "mappings" : {
    "properties" : {
      "field1" : { "type" : "text" }
    }
  }
}

1.2 Create an index with custom analyzers

PUT data1
{
  "settings": {
    "analysis": {
      "analyzer": {
        "email_analyzer": {
          "tokenizer": "standard",
          "filter": [
            "lowercase"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "username": {
        "type": "text",
        "analyzer": "ik_max_word",
        "search_analyzer": "ik_smart"
      },
      "email": {
        "type": "text",
        "analyzer": "email_analyzer",
        "search_analyzer": "email_analyzer"
      },
      "sex": {
        "type": "keyword"
      },
      "address": {
        "type": "text",
        "analyzer": "ik_max_word",
        "search_analyzer": "ik_smart"
      }
    }
  }
}
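As a rough illustration of what the email_analyzer above does (a standard tokenizer followed by a lowercase filter), here is a minimal pure-Python approximation. The real standard tokenizer follows Unicode segmentation rules, so this regex split is only a sketch:

```python
import re

def email_analyzer(text):
    """Approximate es's standard tokenizer + lowercase filter:
    split on non-alphanumeric characters, then lowercase each token."""
    tokens = re.findall(r"[A-Za-z0-9]+", text)
    return [t.lower() for t in tokens]

# an email address is split at '.' and '@' into separate lowercase tokens
print(email_analyzer("John.Doe@Example.COM"))
# → ['john', 'doe', 'example', 'com']
```

This is why searching for "example.com" matches the whole address: both index and query sides produce the same lowercase tokens.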

1.3 List indices

https://10.10.10.10:9200/_cat/indices

1.4 Get a document

Fetch the document with id 1 from the test1 index:

GET test1/_doc/1

1.5 Search documents

https://10.10.10.10:9200/hello/_search?pretty&size=50&from=50
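The size/from parameters in the URL above implement offset pagination. Assuming 1-based page numbers (my convention, not es's), the arithmetic can be sketched as:

```python
def page_params(page, page_size=50):
    """Translate a 1-based page number into es's from/size offset pagination."""
    return {"from": (page - 1) * page_size, "size": page_size}

# page 2 with 50 hits per page corresponds to ?size=50&from=50
print(page_params(2))  # → {'from': 50, 'size': 50}
```

Note that deep offsets get expensive; by default es refuses from + size beyond 10000 (index.max_result_window).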

1.6 Delete by range

Delete documents in the data1 index whose _seq_no is greater than or equal to 50:

POST data1/_delete_by_query
{
  "query": {
    "range" : {
      "_seq_no" : {
        "gte" : 50
      }
    }
  }
}

1.7 group by style query

Aggregate over all values of the source field:

GET data1/_search
{
  "aggs": {
    "models": {
      "terms": {
        "field": "source"
      }
    }
  }
}
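The terms aggregation above behaves like SQL's GROUP BY ... COUNT(*): it buckets documents by each distinct value of source and returns counts, largest bucket first. A minimal client-side sketch of the same idea (the sample documents are made up):

```python
from collections import Counter

# hypothetical documents, standing in for hits from the data1 index
docs = [
    {"source": "nginx"},
    {"source": "nginx"},
    {"source": "syslog"},
]

# bucket by the "source" field and count, like a terms aggregation
counts = Counter(d["source"] for d in docs)
buckets = [{"key": k, "doc_count": n} for k, n in counts.most_common()]
print(buckets)
# → [{'key': 'nginx', 'doc_count': 2}, {'key': 'syslog', 'doc_count': 1}]
```

In es the field must be aggregatable, which is why source would need to be a keyword field (or have a .keyword sub-field) rather than plain text.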

0x05 Bulk import and MySQL sync

Bulk batch insert
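The _bulk endpoint takes newline-delimited JSON: an action line followed by the document source, with a trailing newline at the end. A sketch of building such a payload (the index and field names are made up for illustration):

```python
import json

def build_bulk_body(index, docs):
    """Build an NDJSON _bulk request body: one action line
    plus one source line per document, newline-terminated."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # _bulk requires a trailing newline

body = build_bulk_body("data1", [{"username": "alice"}, {"username": "bob"}])
print(body)
```

POST this body to /_bulk with Content-Type: application/x-ndjson; es rejects the request if the final newline is missing.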

MySQL to Elasticsearch:

https://blog.csdn.net/weixin_39198406/article/details/82983256

0x06 Errors and exceptions

elasticsearch CircuitBreakingException: [fielddata] Data too large

Try adding the cache-limiting settings from the configuration file shown earlier.
Also check memory usage with top; most likely there is not enough memory.

Unless otherwise stated, all content is original. Please credit the source when reposting!