
Simple docker setup with 2 pipelines but one pipeline constantly fails to start (Expected one of input, filter, output) #16793

Closed
mkocent opened this issue Dec 13, 2024 · 5 comments

mkocent commented Dec 13, 2024

Version: Logstash 8.15.3
Env: Running in Docker Desktop on Windows and Docker CE on Linux Mint
Plugins: No additional plugins installed, just the clean image from docker.elastic.co
OS: Linux logstash 6.6.12-linuxkit #1 SMP PREEMPT_DYNAMIC Fri Jan 19 12:50:23 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Description of the problem including expected versus actual behavior:
When two pipelines are defined in Logstash, one of them does not start and Logstash throws this error:

Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:logfiles2, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"input\", \"filter\", \"output\" at line 1, column 1 (byte 1)", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:239:in `initialize'", "org/logstash/execution/AbstractPipelineExt.java:173:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:48:in `initialize'", "org/jruby/RubyClass.java:949:in `new'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:49:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:386:in `block in converge_state'"]}

Only one pipeline works, and even if I remove the working one or the non-working one, it is always pipeline 1 that works and pipeline 2 that doesn't.
Copying the pipeline content from the working file to the non-working one also makes no difference.
The file encoding is set to UTF-8, and the line endings can be CRLF or LF; neither makes a difference.
Changes to the working pipeline or to pipelines.yml are applied correctly after a container restart, so there is no problem there.

Tried this with Docker Desktop on Windows and Docker CE on Linux Mint.
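
Since the parser gives up at line 1, column 1 (byte 1), it apparently chokes on the very first byte of the file, e.g. a UTF-8 BOM or an empty file. A quick way to check what the parser actually sees (a sketch, using the container name from the compose file below):

# Dump the first bytes of the failing config as seen inside the container.
# A UTF-8 BOM would show up as 357 273 277 (0xEF 0xBB 0xBF) before the first "input".
docker exec logstash_container head -c 8 /usr/share/logstash/pipeline/logfiles2.conf | od -c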

Steps to reproduce:

docker-compose.yml

  logstash:
    image: docker.elastic.co/logstash/logstash:8.15.3
    hostname: logstash
    container_name: logstash_container
    user: root
    ports:
      - "5002:5002"
    volumes:
      - ./config-logstash/pipelines.yml:/usr/share/logstash/config/pipelines.yml
      - ./config-logstash/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./config-logstash/logfiles1.conf:/usr/share/logstash/pipeline/logfiles1.conf
      - ./config-logstash/logfiles2.conf:/usr/share/logstash/pipeline/logfiles2.conf
    networks:
      - elastic
    depends_on:
      - elasticsearch

pipelines.yml

- pipeline.id: logfiles1
  path.config: "/usr/share/logstash/pipeline/logfiles1.conf"
- pipeline.id: logfiles2
  path.config: "/usr/share/logstash/pipeline/logfiles2.conf"

logstash.yml

http.host: "0.0.0.0"

logfiles1.conf

input {
    http {
        port => 5002
        codec => "json"
    }
}

filter {
}

output {
    elasticsearch {
        hosts => ["http://elasticsearch:9200"]
        index => "someindex"
        data_stream => false
    }
    stdout { codec => rubydebug }
}

logfiles2.conf (just a copy of logfiles1.conf, but with a different port)

input {
    http {
        port => 5003
        codec => "json"
    }
}

filter {
}

output {
    elasticsearch {
        hosts => ["http://elasticsearch:9200"]
        index => "someindex"
        data_stream => false
    }
    stdout { codec => rubydebug }
}

Logs where logfiles1.conf works but logfiles2.conf does not

Using bundled JDK: /usr/share/logstash/jdk
2024-12-13T07:54:21.509432138Z Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
2024-12-13T07:54:21.600256708Z [2024-12-13T07:54:21,599][WARN ][deprecation.logstash.settings] The setting `http.host` is a deprecated alias for `api.http.host` and will be removed in a future release of Logstash. Please use api.http.host instead
2024-12-13T07:54:21.604309332Z [2024-12-13T07:54:21,604][WARN ][deprecation.logstash.runner] NOTICE: Running Logstash as superuser is not recommended and won't be allowed in the future. Set 'allow_superuser' to 'false' to avoid startup errors in future releases.
2024-12-13T07:54:21.608522737Z [2024-12-13T07:54:21,608][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
2024-12-13T07:54:21.609897903Z [2024-12-13T07:54:21,609][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.15.3", "jruby.version"=>"jruby 9.4.8.0 (3.1.4) 2024-07-02 4d41e55a67 OpenJDK 64-Bit Server VM 21.0.4+7-LTS on 21.0.4+7-LTS +indy +jit [x86_64-linux]"}
2024-12-13T07:54:21.611205090Z [2024-12-13T07:54:21,610][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
2024-12-13T07:54:21.612920639Z [2024-12-13T07:54:21,612][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
2024-12-13T07:54:21.613044008Z [2024-12-13T07:54:21,612][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
2024-12-13T07:54:21.617715353Z [2024-12-13T07:54:21,617][INFO ][logstash.settings        ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
2024-12-13T07:54:21.618944442Z [2024-12-13T07:54:21,618][INFO ][logstash.settings        ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
2024-12-13T07:54:21.753290568Z [2024-12-13T07:54:21,752][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"282bd396-4081-402d-9071-0cc3146e3021", :path=>"/usr/share/logstash/data/uuid"}
2024-12-13T07:54:22.165887710Z [2024-12-13T07:54:22,165][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
2024-12-13T07:54:22.171678283Z [2024-12-13T07:54:22,170][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:logfiles2, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"input\", \"filter\", \"output\" at line 1, column 1 (byte 1)", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:239:in `initialize'", "org/logstash/execution/AbstractPipelineExt.java:173:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:48:in `initialize'", "org/jruby/RubyClass.java:949:in `new'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:49:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:386:in `block in converge_state'"]}
2024-12-13T07:54:22.408527083Z [2024-12-13T07:54:22,408][INFO ][org.reflections.Reflections] Reflections took 107 ms to scan 1 urls, producing 138 keys and 481 values
2024-12-13T07:54:22.522971650Z [2024-12-13T07:54:22,522][INFO ][logstash.codecs.json     ] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
2024-12-13T07:54:22.677538665Z [2024-12-13T07:54:22,677][INFO ][logstash.javapipeline    ] Pipeline `logfiles1` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
2024-12-13T07:54:22.725678975Z [2024-12-13T07:54:22,722][INFO ][logstash.outputs.elasticsearch][logfiles1] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
2024-12-13T07:54:22.803794637Z [2024-12-13T07:54:22,803][INFO ][logstash.outputs.elasticsearch][logfiles1] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
2024-12-13T07:54:22.828689530Z [2024-12-13T07:54:22,828][INFO ][logstash.outputs.elasticsearch][logfiles1] Failed to perform request {:message=>"Connect to elasticsearch:9200 [elasticsearch/172.23.0.2] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to elasticsearch:9200 [elasticsearch/172.23.0.2] failed: Connection refused>}
2024-12-13T07:54:22.829577741Z [2024-12-13T07:54:22,829][WARN ][logstash.outputs.elasticsearch][logfiles1] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connect to elasticsearch:9200 [elasticsearch/172.23.0.2] failed: Connection refused"}
2024-12-13T07:54:22.841062326Z [2024-12-13T07:54:22,840][INFO ][logstash.javapipeline    ][logfiles1] Starting pipeline {:pipeline_id=>"logfiles1", "pipeline.workers"=>16, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2000, "pipeline.sources"=>["/usr/share/logstash/pipeline/logfiles1.conf"], :thread=>"#<Thread:0x1e85df8d /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
2024-12-13T07:54:23.416022066Z [2024-12-13T07:54:23,415][INFO ][logstash.javapipeline    ][logfiles1] Pipeline Java execution initialization time {"seconds"=>0.57}
2024-12-13T07:54:23.430060108Z [2024-12-13T07:54:23,429][INFO ][logstash.codecs.json     ][logfiles1] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
2024-12-13T07:54:23.492576142Z [2024-12-13T07:54:23,492][INFO ][logstash.javapipeline    ][logfiles1] Pipeline started {"pipeline.id"=>"logfiles1"}
2024-12-13T07:54:23.494519844Z [2024-12-13T07:54:23,494][INFO ][logstash.inputs.http     ][logfiles1][80f49a02c2e88ad6f47e974fff02c1f03abe8f8ab8cad1b00cccbe18e6ca5831] Starting http input listener {:address=>"0.0.0.0:5002", :ssl=>"false"}
2024-12-13T07:54:27.861809198Z [2024-12-13T07:54:27,861][WARN ][logstash.outputs.elasticsearch][logfiles1] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
2024-12-13T07:54:27.874243756Z [2024-12-13T07:54:27,873][INFO ][logstash.outputs.elasticsearch][logfiles1] Elasticsearch version determined (8.15.3) {:es_version=>8}
2024-12-13T07:54:27.874271233Z [2024-12-13T07:54:27,874][WARN ][logstash.outputs.elasticsearch][logfiles1] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
2024-12-13T07:54:28.839165450Z [2024-12-13T07:54:28,838][INFO ][logstash.outputs.elasticsearch][logfiles1] Using a default mapping template {:es_version=>8, :ecs_compatibility=>:v8}
2024-12-13T07:54:28.859442544Z [2024-12-13T07:54:28,859][INFO ][logstash.outputs.elasticsearch][logfiles1] Installing Elasticsearch template {:name=>"ecs-logstash"}

Content of the folder /usr/share/logstash/pipeline/. Note that the logstash.conf file is auto-generated every time, which is not what I want (one way around this is sketched after the listing).

drwxr-sr-x 1 logstash root 4096 Dec 13 07:54 .
drwxrwsr-x 1 logstash root 4096 Oct  8 15:42 ..
-rwxr-xr-x 1 root     root  261 Dec 13 07:53 logfiles1.conf
-rwxr-xr-x 1 root     root  264 Dec 13 07:53 logfiles2.conf
-rw-r--r-- 1 logstash root   92 Oct  8 15:28 logstash.conf
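
One way to avoid the default logstash.conf (a sketch; the ./config-logstash/pipeline/ host directory is an assumed layout, not part of the original setup) is to bind-mount the whole pipeline directory instead of the individual files, so the mount replaces the image's default contents:

    volumes:
      - ./config-logstash/pipelines.yml:/usr/share/logstash/config/pipelines.yml
      - ./config-logstash/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./config-logstash/pipeline/:/usr/share/logstash/pipeline/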

mkocent commented Dec 13, 2024

Sidenote:
As mentioned in 1503:
I also mounted the two pipeline config files into separate folders and configured pipelines.yml accordingly. It makes no difference.

docker-compose.yml

    volumes:
      - ./config-logstash/logfiles1.conf:/usr/share/logstash/mypipeline1/logfiles1.conf
      - ./config-logstash/logfiles2.conf:/usr/share/logstash/mypipeline2/logfiles2.conf

pipelines.yml

- pipeline.id: logfiles1
  path.config: "/usr/share/logstash/mypipeline1/logfiles1.conf"
- pipeline.id: logfiles2
  path.config: "/usr/share/logstash/mypipeline2/logfiles2.conf"

Content of folder /usr/share/logstash/mypipeline1

root@logstash:/usr/share/logstash/mypipeline1# ls -la
total 13
drwxr-sr-x 2 root     root 4096 Dec 13 08:26 .
drwxrwsr-x 1 logstash root 4096 Dec 13 08:26 ..
-rwxr-xr-x 1 root     root  261 Dec 13 07:53 logfiles1.conf

Content of folder /usr/share/logstash/mypipeline2

root@logstash:/usr/share/logstash/mypipeline2# ls -la
total 13
drwxr-sr-x 2 root     root 4096 Dec 13 08:26 .
drwxrwsr-x 1 logstash root 4096 Dec 13 08:26 ..
-rwxr-xr-x 1 root     root  264 Dec 13 07:53 logfiles2.conf

Logfile

[2024-12-13T08:26:52,750][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:logfiles2, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"input\", \"filter\", \"output\" at line 1, column 1 (byte 1)", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:239:in `initialize'", "org/logstash/execution/AbstractPipelineExt.java:173:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:48:in `initialize'", "org/jruby/RubyClass.java:949:in `new'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:49:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:386:in `block in converge_state'"]}
...
[2024-12-13T08:26:53,299][INFO ][logstash.javapipeline    ][logfiles1] Starting pipeline {:pipeline_id=>"logfiles1", "pipeline.workers"=>16, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2000, "pipeline.sources"=>["/usr/share/logstash/mypipeline1/logfiles1.conf"], :thread=>"#<Thread:0x1167dfff /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
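
To rule out a broken mount, the file as seen on the host can be compared with the copy inside the container (a sketch, using the paths from this setup):

# Matching checksums mean the bind mount delivers the file unchanged.
md5sum ./config-logstash/logfiles2.conf
docker exec logstash_container md5sum /usr/share/logstash/mypipeline2/logfiles2.conf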

mkocent commented Dec 13, 2024

Addition:
I created a custom image with dos2unix to make sure my pipeline files use Linux line endings, but it made no difference.

FROM docker.elastic.co/logstash/logstash:8.15.3
USER root

# Install dos2unix for normalizing line endings.
RUN apt-get update && \
    apt-get install -y dos2unix && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

COPY ./logfiles1.conf /usr/share/logstash/pipeline/
COPY ./logfiles2.conf /usr/share/logstash/pipeline/

# Convert CRLF to LF and hand the files back to the logstash user.
RUN find /usr/share/logstash/pipeline/ -type f -exec dos2unix {} +
RUN chown -R logstash:logstash /usr/share/logstash/pipeline/

USER logstash
CMD ["logstash"]
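
To verify the conversion actually happened in the built image, cat -A prints CRLF line endings as ^M$ (a sketch; <your-image> stands for whatever tag the custom image is built with):

# LF-only files end each line with "$"; leftover CRLFs show up as "^M$".
docker run --rm --entrypoint cat <your-image> -A /usr/share/logstash/pipeline/logfiles2.conf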

robbavey (Member) commented:

Can you add the following lines to logstash.yml:

config.debug: true
log.level: debug

And send the output? I can't reproduce the issue locally, but I'd like to know what Logstash thinks your configs look like.
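
Independent of the debug logging, the failing file can also be parsed in isolation with Logstash's config test mode (a sketch; --path.data points at a scratch directory so the test run doesn't clash with the already-running instance):

# Parse logfiles2.conf without starting the pipeline; prints the parse error, if any.
docker exec logstash_container bin/logstash -f /usr/share/logstash/pipeline/logfiles2.conf \
    --config.test_and_exit --path.data /tmp/ls-config-test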

robbavey self-assigned this Dec 16, 2024

mkocent commented Dec 17, 2024

Thanks for your reply.
I changed my logstash.yml to this:

http.host: "0.0.0.0"
config.debug: true
log.level: debug

Here are the logs:

logstash-log.txt

mkocent commented Dec 19, 2024

This seems to be a strange Azure Git problem.
When I copy the working pipeline file twice, once as my second pipeline config and once for pipelines.yml, it works.

docker-compose.yml

  logstash:
    image: docker.elastic.co/logstash/logstash:8.15.3
    hostname: logstash
    container_name: logstash_container
    user: root
    ports:
      - "5002:5002"
      - "5003:5003"
    volumes:
      - ./config-logstash/pipelines.yml:/usr/share/logstash/config/pipelines.yml
      - ./config-logstash/logfiles1.conf:/usr/share/logstash/my-pipelines/pipeline1/logfiles1.conf
      - ./config-logstash/logfiles2.conf:/usr/share/logstash/my-pipelines/pipeline2/logfiles2.conf
#      - ./data/logstash:/usr/share/logstash/data
    networks:
      - elastic
    depends_on:
      - elasticsearch

pipelines.yml

- pipeline.id: pipeline1
  path.config: "/usr/share/logstash/my-pipelines/pipeline1/logfiles1.conf"
- pipeline.id: pipeline2
  path.config: "/usr/share/logstash/my-pipelines/pipeline2/logfiles2.conf"

logfiles2.conf was created by copying logfiles1.conf.
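
With both pipelines up, a quick smoke test against the two http inputs (a sketch, assuming the ports published in the compose file above):

# Each request should get a 200 response and show up in the rubydebug stdout output.
curl -s -XPOST http://localhost:5002 -H 'Content-Type: application/json' -d '{"message":"pipeline1 test"}'
curl -s -XPOST http://localhost:5003 -H 'Content-Type: application/json' -d '{"message":"pipeline2 test"}'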

mkocent closed this as completed Dec 19, 2024