I am using ELK and have the following document structure

 {
  "_index": "prod1-db.log-*",
  "_type": "db.log",
  "_id": "AVadEaq7",
  "_score": null,
  "_source": {
    "message": "2016-07-08T12:52:42.026+0000 I NETWORK  [conn4928242] end connection 192.168.170.62:47530 (31 connections now open)",
    "@version": "1",
    "@timestamp": "2016-08-18T09:50:54.247Z",
    "type": "log",
    "input_type": "log",
    "count": 1,
    "beat": {
      "hostname": "prod1",
      "name": "prod1"
    },
    "offset": 1421607236,
    "source": "/var/log/db/db.log",
    "fields": null,
    "host": "prod1",
    "tags": [
      "beats_input_codec_plain_applied"
    ]
  },
  "fields": {
    "@timestamp": [
      1471513854247
    ]
  },
  "sort": [
    1471513854247
  ]
} 

I want to change the message field to not_analyzed. I am wondering how to use the Elasticsearch Mapping API to achieve that? For example, how to use the PUT Mapping API to add a new type to the existing index?

I am using Kibana 4.5 and Elasticsearch 2.3.

UPDATE Tried the following template.json in logstash,

{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "message" : {
          "type" : "string",
          "index" : "not_analyzed"
        }
      }
    }
  }
}

and got the following errors when starting logstash:

logstash_1       | {:timestamp=>"2016-08-24T11:00:26.097000+0000", :message=>"Invalid setting for elasticsearch output plugin:\n\n  output {\n    elasticsearch {\n      # This setting must be a path\n      # File does not exist or cannot be opened /home/dw/docker-elk/logstash/core_mapping_template.json\n      template => \"/home/dw/docker-elk/logstash/core_mapping_template.json\"\n      ...\n    }\n  }", :level=>:error}
logstash_1       | {:timestamp=>"2016-08-24T11:00:26.153000+0000", :message=>"Pipeline aborted due to error", :exception=>#<LogStash::ConfigurationError: Something is wrong with your configuration.>, :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/config/mixin.rb:134:in `config_init'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/outputs/base.rb:63:in `initialize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/output_delegator.rb:74:in `register'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:181:in `start_workers'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:181:in `start_workers'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:136:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/agent.rb:473:in `start_pipeline'"], :level=>:error}
logstash_1       | {:timestamp=>"2016-08-24T11:00:29.168000+0000", :message=>"stopping pipeline", :id=>"main"}
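The first error says Logstash cannot open the file at the `template` path. Since Logstash runs in Docker here (`logstash_1`), the file must exist *inside the container*, not just on the host. One way to fix this, assuming a docker-compose setup (the mount paths below are illustrative, not taken from the question):

```
# docker-compose.yml fragment: mount the template into the Logstash container
logstash:
  volumes:
    - ./logstash/core_mapping_template.json:/etc/logstash/core_mapping_template.json
```

and then point the output plugin at the in-container path:

```
output {
  elasticsearch {
    template => "/etc/logstash/core_mapping_template.json"
    template_overwrite => true
  }
}
```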

2 Answers

You can't change the mapping of an index once it exists, except for adding new fields (e.g. to objects, or as multi-fields).

If you want to use the Mapping API for that your request would look like this:

PUT /prod1-db.log-*/_mapping/log
{
  "properties": {
    "message": {
      "type": "string",
      "index": "not_analyzed"
    }
  }
}
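The request above is in the Sense/Kibana Console form; the equivalent with curl would look like this (`localhost:9200` is an assumed host — adapt it to your cluster; and note that, as stated above, this only succeeds where `message` is not already mapped as analyzed):

```
curl -XPUT 'http://localhost:9200/prod1-db.log-*/_mapping/log' -d '{
  "properties": {
    "message": {
      "type": "string",
      "index": "not_analyzed"
    }
  }
}'
```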

However, I would recommend creating a JSON file with your mappings and adding it to your Logstash config.

A template file might look like this (customize it to your own fields):

{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "action" : {
          "type" : "string",
          "fields" : {
            "raw" : {
              "index" : "not_analyzed",
              "type" : "string"
            }
          }
        },
        "ad_domain" : {
          "type" : "string"
        },
        "auth" : {
          "type" : "long"
        },
        "authtime" : {
          "type" : "long"
        },
        "avscantime" : {
          "type" : "long"
        },
        "cached" : {
          "type" : "boolean"
        }
      }
    }
  }
}

And the elasticsearch output in your Logstash config would look like this:

elasticsearch {
    template => "/etc/logstash/template/template.json"
    template_overwrite => true
}
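Note that an index template only applies to indices created *after* the template is in place. A sketch for applying it right away (the index and template names below are examples, not taken from the question; deleting an index loses its data):

```
# upload the template directly (Logstash also does this on startup)
curl -XPUT 'http://localhost:9200/_template/logstash' -d @template.json

# delete the current index so the next one is created with the new mapping
curl -XDELETE 'http://localhost:9200/logstash-2016.08.24'
```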

Comments
tried `PUT /prod1-db.log-*/_mapping/log { "properties": { "message": { "type": "string", "index": "not_analyzed" } } }` but got an error from elasticsearch: `java.lang.IllegalArgumentException: invalid version format: {"PROPERTIES": {"MESSAGE": {"TYPE": "STRING", "INDEX": "NOT_ANALYZED"}}} HTTP/1.1`
@daiyue Have you recreated the index?
what do you mean by recreating the index? How to do that in combination of adding a mapping?
@daiyue Remapping an existing index is not possible (with a few exceptions). A mapping only applies to an index when it is created. I would strongly recommend you go the route of using a template file, because then you don't have to deal with curl and can edit changes really easily.
Yes. It defines which index it should apply the mapping to.
If you haven't specified any mappings for your fields at index-creation time, then the first time you index a document, Elasticsearch automatically chooses a mapping for each field based on the data provided. Looking at the document in the question, Elasticsearch will already have assigned an analyzer to the message field. Once it is assigned, you cannot change it; the only way is to create a fresh index.
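If you need to keep the existing data, a common approach is to create a new index with the desired mapping and copy the documents over with the Reindex API (available from Elasticsearch 2.3). The host and index names below are examples, not taken from the question:

```
# 1. create the target index with message mapped as not_analyzed
curl -XPUT 'http://localhost:9200/new-index' -d '{
  "mappings": {
    "log": {
      "properties": {
        "message": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}'

# 2. copy documents from the old index into the new one
curl -XPOST 'http://localhost:9200/_reindex' -d '{
  "source": { "index": "old-index" },
  "dest":   { "index": "new-index" }
}'
```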

Comments
