Sunday, April 21, 2024

Configure ELK Stack on Fedora

  • Information about ELK can be found at https://springimplant.blogspot.com/p/elk-stack.html
  • Installing ELK on Fedora
    • Install Java
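      • Elasticsearch 8.x ships with its own bundled JDK, so a separate system Java is optional; a minimal sketch for Fedora, assuming the java-17-openjdk package is available on your release:
      • sudo dnf -y install java-17-openjdk
      • java -version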
    • Add the Elastic Stack repository
      • cat <<EOF | sudo tee /etc/yum.repos.d/elasticsearch.repo
        [elasticsearch-8.x]
        name=Elasticsearch repository for 8.x packages
        baseurl=https://artifacts.elastic.co/packages/8.x/yum
        gpgcheck=1
        gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
        enabled=1
        autorefresh=1
        type=rpm-md
        EOF
    • Import GPG Key
      • sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
    • Install elasticsearch
      • sudo yum -y install vim elasticsearch
    • Enable Service
      • sudo systemctl enable --now elasticsearch.service
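    • Optionally, confirm the service came up (standard systemd check):
      • sudo systemctl status elasticsearch.service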
    • Check Elastic
      • sudo curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200
    • Create a test index:
      • sudo curl --cacert /etc/elasticsearch/certs/http_ca.crt -X PUT "https://localhost:9200/mytest_index" -u elastic
    • Configure the elastic user password
      • sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic -i
    • Store the password in a shell environment variable
      • export ELASTIC_PASSWORD="your_password"
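    • The stored password can then be reused in API calls; a minimal sketch, reusing the CA certificate path from the earlier check:
      • curl --cacert /etc/elasticsearch/certs/http_ca.crt -u "elastic:$ELASTIC_PASSWORD" https://localhost:9200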
    • Generate new Enrollment Token
      • /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
    • On your new Elasticsearch node, pass the enrollment token as a parameter to the elasticsearch-reconfigure-node tool
      • /usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <enrollment-token>
    • Install Kibana
      • sudo yum -y install kibana
    • Start Kibana
      • sudo systemctl restart kibana.service
    • Reload systemd manager configuration
      • sudo /bin/systemctl daemon-reload
    • Don't change any settings yet; make sure Elasticsearch is running, and stop other servers (nginx, etc.) that could conflict.
    • Check the Kibana URL:
      • http://localhost:5601/
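    • You can also verify Kibana from the shell; a quick sketch against Kibana's status API:
      • curl -s http://localhost:5601/api/status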
    • Generate Enrollment Token for Kibana
      • sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
    • Paste the token into the Kibana URL http://localhost:5601/ and hit “Configure Elastic”. A verification code is generated; run the command below to retrieve it.
      • sudo /usr/share/kibana/bin/kibana-verification-code
    • Enter the code from the command output. Let the configuration complete, then authenticate with elastic as the username and the password configured earlier.
    • Install Logstash
      • sudo yum -y install logstash
    • Install other ELK tools
      • sudo yum install filebeat auditbeat metricbeat packetbeat heartbeat-elastic
  • ELK Configuration File Changes
    • /etc/elasticsearch/elasticsearch.yml
      •  # ======================== Elasticsearch Configuration =========================  
         #  
         # NOTE: Elasticsearch comes with reasonable defaults for most settings.  
         #    Before you set out to tweak and tune the configuration, make sure you  
         #    understand what are you trying to accomplish and the consequences.  
         #  
         # The primary way of configuring a node is via this file. This template lists  
         # the most important settings you may want to configure for a production cluster.  
         #  
         # Please consult the documentation for further information on configuration options:  
         # https://www.elastic.co/guide/en/elasticsearch/reference/index.html  
         #  
         # ---------------------------------- Cluster -----------------------------------  
         #  
         # Use a descriptive name for your cluster:  
         #  
         cluster.name: my-application  
         #  
         # ------------------------------------ Node ------------------------------------  
         #  
         # Use a descriptive name for the node:  
         #  
         node.name: node-1  
         #  
         # Add custom attributes to the node:  
         #  
         #node.attr.rack: r1  
         #  
         # ----------------------------------- Paths ------------------------------------  
         #  
         # Path to directory where to store the data (separate multiple locations by comma):  
         #  
         path.data: /var/lib/elasticsearch  
         #  
         # Path to log files:  
         #  
         path.logs: /var/log/elasticsearch  
         #  
         # ----------------------------------- Memory -----------------------------------  
         #  
         # Lock the memory on startup:  
         #  
         #bootstrap.memory_lock: true  
         #  
         # Make sure that the heap size is set to about half the memory available  
         # on the system and that the owner of the process is allowed to use this  
         # limit.  
         #  
         # Elasticsearch performs poorly when the system is swapping the memory.  
         #  
         # ---------------------------------- Network -----------------------------------  
         #  
         # By default Elasticsearch is only accessible on localhost. Set a different  
         # address here to expose this node on the network:  
         #  
         network.host: 0.0.0.0  
         #  
         # By default Elasticsearch listens for HTTP traffic on the first free port it  
         # finds starting at 9200. Set a specific HTTP port here:  
         #  
         http.port: 9200  
         #  
         # For more information, consult the network module documentation.  
         #  
         # --------------------------------- Discovery ----------------------------------  
         #  
         # Pass an initial list of hosts to perform discovery when this node is started:  
         # The default list of hosts is ["127.0.0.1", "[::1]"]  
         #  
         discovery.seed_hosts: []  
         #  
         # Bootstrap the cluster using an initial set of master-eligible nodes:  
         #  
         #cluster.initial_master_nodes: ["node-1", "node-2"]  
         #  
         # For more information, consult the discovery and cluster formation module documentation.  
         #  
         # ---------------------------------- Various -----------------------------------  
         #  
         # Allow wildcard deletion of indices:  
         #  
         #action.destructive_requires_name: false  
         #----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------  
         #  
         # The following settings, TLS certificates, and keys have been automatically     
         # generated to configure Elasticsearch security features on 11-05-2024 16:26:37  
         #  
         # --------------------------------------------------------------------------------  
         # Enable security features  
         xpack.security.enabled: false  
         xpack.security.enrollment.enabled: true  
         # Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents  
         xpack.security.http.ssl:  
          enabled: true  
          keystore.path: certs/http.p12  
         # Enable encryption and mutual authentication between cluster nodes  
         xpack.security.transport.ssl:  
          enabled: true  
          verification_mode: certificate  
          keystore.path: certs/transport.p12  
          truststore.path: certs/transport.p12  
         # Create a new cluster with the current node only  
         # Additional nodes can still join the cluster later  
         cluster.initial_master_nodes: ["springimplant-HP-Notebook"]  
         # Allow HTTP API connections from anywhere  
         # Connections are encrypted and require user authentication  
         http.host: 0.0.0.0  
         # Allow other nodes to join the cluster from anywhere  
         # Connections are encrypted and mutually authenticated  
         #transport.host: 0.0.0.0  
         #----------------------- END SECURITY AUTO CONFIGURATION -------------------------  
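    • After editing elasticsearch.yml, restart the service and verify the node responds. A minimal sketch; with xpack.security.enabled: false as above, plain HTTP should work, otherwise use the HTTPS form with the CA certificate from earlier:
      • sudo systemctl restart elasticsearch.service
      • curl http://localhost:9200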
    • /etc/kibana/kibana.yml
      •  # For more configuration options see the configuration guide for Kibana in  
         # https://www.elastic.co/guide/index.html  
         # =================== System: Kibana Server ===================  
         # Kibana is served by a back end server. This setting specifies the port to use.  
         server.port: 5601  
         # Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.  
         # The default is 'localhost', which usually means remote machines will not be able to connect.  
         # To allow connections from remote users, set this parameter to a non-loopback address.  
         server.host: 0.0.0.0  
         # Enables you to specify a path to mount Kibana at if you are running behind a proxy.  
         # Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath  
         # from requests it receives, and to prevent a deprecation warning at startup.  
         # This setting cannot end in a slash.  
         #server.basePath: ""  
         # Specifies whether Kibana should rewrite requests that are prefixed with  
         # `server.basePath` or require that they are rewritten by your reverse proxy.  
         # Defaults to `false`.  
         #server.rewriteBasePath: false  
         # Specifies the public URL at which Kibana is available for end users. If  
         # `server.basePath` is configured this URL should end with the same basePath.  
         #server.publicBaseUrl: ""  
         # The maximum payload size in bytes for incoming server requests.  
         #server.maxPayload: 1048576  
         # The Kibana server's name. This is used for display purposes.  
         #server.name: "your-hostname"  
         # =================== System: Kibana Server (Optional) ===================  
         # Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.  
         # These settings enable SSL for outgoing requests from the Kibana server to the browser.  
         #server.ssl.enabled: false  
         #server.ssl.certificate: /path/to/your/server.crt  
         #server.ssl.key: /path/to/your/server.key  
         # =================== System: Elasticsearch ===================  
         # The URLs of the Elasticsearch instances to use for all your queries.  
         elasticsearch.hosts: ["http://localhost:9200"]  
         # If your Elasticsearch is protected with basic authentication, these settings provide  
         # the username and password that the Kibana server uses to perform maintenance on the Kibana  
         # index at startup. Your Kibana users still need to authenticate with Elasticsearch, which  
         # is proxied through the Kibana server.  
         #elasticsearch.username: "kibana_system"  
         #elasticsearch.password: "pass"  
         # Kibana can also authenticate to Elasticsearch via "service account tokens".  
         # Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.  
         # Use this token instead of a username/password.  
         # elasticsearch.serviceAccountToken: "my_token"  
         # Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of  
         # the elasticsearch.requestTimeout setting.  
         #elasticsearch.pingTimeout: 1500  
         # Time in milliseconds to wait for responses from the back end or Elasticsearch. This value  
         # must be a positive integer.  
         #elasticsearch.requestTimeout: 30000  
         # The maximum number of sockets that can be used for communications with elasticsearch.  
         # Defaults to `Infinity`.  
         #elasticsearch.maxSockets: 1024  
         # Specifies whether Kibana should use compression for communications with elasticsearch  
         # Defaults to `false`.  
         #elasticsearch.compression: false  
         # List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side  
         # headers, set this value to [] (an empty list).  
         #elasticsearch.requestHeadersWhitelist: [ authorization ]  
         # Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten  
         # by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.  
         #elasticsearch.customHeaders: {}  
         # Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.  
         #elasticsearch.shardTimeout: 30000  
         # =================== System: Elasticsearch (Optional) ===================  
         # These files are used to verify the identity of Kibana to Elasticsearch and are required when  
         # xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.  
         #elasticsearch.ssl.certificate: /path/to/your/client.crt  
         #elasticsearch.ssl.key: /path/to/your/client.key  
         # Enables you to specify a path to the PEM file for the certificate  
         # authority for your Elasticsearch instance.  
         #elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]  
         # To disregard the validity of SSL certificates, change this setting's value to 'none'.  
         #elasticsearch.ssl.verificationMode: full  
         # =================== System: Logging ===================  
         # Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'  
         #logging.root.level: debug  
         # Enables you to specify a file where Kibana stores log output.  
         logging:  
          appenders:  
           file:  
            type: file  
            fileName: /var/log/kibana/kibana.log  
            layout:  
             type: json  
          root:  
           appenders:  
            - default  
            - file  
         # policy:  
         #  type: size-limit  
         #  size: 256mb  
         # strategy:  
         #  type: numeric  
         #  max: 10  
         # layout:  
         #  type: json  
         # Logs queries sent to Elasticsearch.  
         #logging.loggers:  
         # - name: elasticsearch.query  
         #  level: debug  
         # Logs http responses.  
         #logging.loggers:  
         # - name: http.server.response  
         #  level: debug  
         # Logs system usage information.  
         #logging.loggers:  
         # - name: metrics.ops  
         #  level: debug  
         # Enables debug logging on the browser (dev console)  
         #logging.browser.root:  
         # level: debug  
         # =================== System: Other ===================  
         # The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data  
         #path.data: data  
         # Specifies the path where Kibana creates the process ID file.  
         pid.file: /run/kibana/kibana.pid  
         # Set the interval in milliseconds to sample system and process performance  
         # metrics. Minimum is 100ms. Defaults to 5000ms.  
         #ops.interval: 5000  
         # Specifies locale to be used for all localizable strings, dates and number formats.  
         # Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".  
         #i18n.locale: "en"  
         # =================== Frequently used (Optional)===================  
         # =================== Saved Objects: Migrations ===================  
         # Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.  
         # The number of documents migrated at a time.  
         # If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,  
         # use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.  
         #migrations.batchSize: 1000  
         # The maximum payload size for indexing batches of upgraded saved objects.  
         # To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.  
         # This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`  
         # configuration option. Default: 100mb  
         #migrations.maxBatchSizeBytes: 100mb  
         # The number of times to retry temporary migration failures. Increase the setting  
         # if migrations fail frequently with a message such as `Unable to complete the [...] step after  
         # 15 attempts, terminating`. Defaults to 15  
         #migrations.retryAttempts: 15  
         # =================== Search Autocomplete ===================  
         # Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.  
         # This value must be a whole number greater than zero. Defaults to 1000ms  
         #unifiedSearch.autocomplete.valueSuggestions.timeout: 1000  
         # Maximum number of documents loaded by each shard to generate autocomplete suggestions.  
         # This value must be a whole number greater than zero. Defaults to 100_000  
         #unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000  
         # This section was automatically generated during setup.  
         elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE3MTU0NDY3MzU0MzA6d0ZXRmhPZ3JRWC00MlM1emgtcWM5dw  
         elasticsearch.ssl.certificateAuthorities: [/var/lib/kibana/ca_1715446736140.crt]  
         xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://192.168.1.16:9200'], ca_trusted_fingerprint: 53ea8ad754c940e9b1e7580e87883a57a73813508acc3eb0139e0af8786380d4}]  
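    • After editing kibana.yml, restart Kibana and follow its log; the log file path comes from the logging section above:
      • sudo systemctl restart kibana.service
      • sudo tail -f /var/log/kibana/kibana.log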
    • /etc/filebeat/filebeat.yml
      •  ###################### Filebeat Configuration Example #########################  
         # This file is an example configuration file highlighting only the most common  
         # options. The filebeat.reference.yml file from the same directory contains all the  
         # supported options with more comments. You can use it as a reference.  
         #  
         # You can find the full configuration reference here:  
         # https://www.elastic.co/guide/en/beats/filebeat/index.html  
         # For more available modules and options, please see the filebeat.reference.yml sample  
         # configuration file.  
         # ============================== Filebeat inputs ===============================  
         filebeat.inputs:  
         # Each - is an input. Most options can be set at the input level, so  
         # you can use different inputs for various configurations.  
         # Below are the input-specific configurations.  
         # filestream is an input for collecting log messages from files.  
         - type: filestream  
          # Unique ID among all inputs, an ID is required.  
          id: my-filestream-id  
          # Change to true to enable this input configuration.  
          enabled: false  
          # Paths that should be crawled and fetched. Glob based paths.  
          paths:  
           - /var/log/*.log  
           #- c:\programdata\elasticsearch\logs\*  
          # Exclude lines. A list of regular expressions to match. It drops the lines that are  
          # matching any regular expression from the list.  
          # Line filtering happens after the parsers pipeline. If you would like to filter lines  
          # before parsers, use include_message parser.  
          #exclude_lines: ['^DBG']  
          # Include lines. A list of regular expressions to match. It exports the lines that are  
          # matching any regular expression from the list.  
          # Line filtering happens after the parsers pipeline. If you would like to filter lines  
          # before parsers, use include_message parser.  
          #include_lines: ['^ERR', '^WARN']  
          # Exclude files. A list of regular expressions to match. Filebeat drops the files that  
          # are matching any regular expression from the list. By default, no files are dropped.  
          #prospector.scanner.exclude_files: ['.gz$']  
          # Optional additional fields. These fields can be freely picked  
          # to add additional information to the crawled log files for filtering  
          #fields:  
          # level: debug  
          # review: 1  
         # ============================== Filebeat modules ==============================  
         filebeat.config.modules:  
          # Glob pattern for configuration loading  
          path: ${path.config}/modules.d/*.yml  
          # Set to true to enable config reloading  
          reload.enabled: false  
          # Period on which files under path should be checked for changes  
          #reload.period: 10s  
         # ======================= Elasticsearch template setting =======================  
         setup.template.settings:  
          index.number_of_shards: 1  
          #index.codec: best_compression  
          #_source.enabled: false  
         # ================================== General ===================================  
         # The name of the shipper that publishes the network data. It can be used to group  
         # all the transactions sent by a single shipper in the web interface.  
         #name:  
         # The tags of the shipper are included in their field with each  
         # transaction published.  
         #tags: ["service-X", "web-tier"]  
         # Optional fields that you can specify to add additional information to the  
         # output.  
         #fields:  
         # env: staging  
         # ================================= Dashboards =================================  
         # These settings control loading the sample dashboards to the Kibana index. Loading  
         # the dashboards is disabled by default and can be enabled either by setting the  
         # options here or by using the `setup` command.  
         #setup.dashboards.enabled: false  
         # The URL from where to download the dashboard archive. By default, this URL  
         # has a value that is computed based on the Beat name and version. For released  
         # versions, this URL points to the dashboard archive on the artifacts.elastic.co  
         # website.  
         #setup.dashboards.url:  
         # =================================== Kibana ===================================  
         # Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.  
         # This requires a Kibana endpoint configuration.  
         setup.kibana:  
          # Kibana Host  
          # Scheme and port can be left out and will be set to the default (http and 5601)  
          # In case you specify and additional path, the scheme is required: http://localhost:5601/path  
          # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601  
          #host: "localhost:5601"  
          # Kibana Space ID  
          # ID of the Kibana Space into which the dashboards should be loaded. By default,  
          # the Default Space will be used.  
          #space.id:  
         # =============================== Elastic Cloud ================================  
         # These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).  
         # The cloud.id setting overwrites the `output.elasticsearch.hosts` and  
         # `setup.kibana.host` options.  
         # You can find the `cloud.id` in the Elastic Cloud web UI.  
         #cloud.id:  
         # The cloud.auth setting overwrites the `output.elasticsearch.username` and  
         # `output.elasticsearch.password` settings. The format is `<user>:<pass>`.  
         #cloud.auth:  
         # ================================== Outputs ===================================  
         # Configure what output to use when sending the data collected by the beat.  
         # ---------------------------- Elasticsearch Output ----------------------------  
         # output.elasticsearch:  
          # Array of hosts to connect to.  
         # hosts: ["localhost:9200"]  
          # Performance preset - one of "balanced", "throughput", "scale",  
          # "latency", or "custom".  
           #preset: balanced  
          # Protocol - either `http` (default) or `https`.  
          #protocol: "https"  
          # Authentication credentials - either API key or username/password.  
          #api_key: "id:api_key"  
          #username: "elastic"  
          #password: "changeme"  
         # ------------------------------ Logstash Output -------------------------------  
         output.logstash:  
          # The Logstash hosts  
          hosts: ["0.0.0.0:5044"]  
          # Optional SSL. By default is off.  
          # List of root certificates for HTTPS server verifications  
          #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]  
          # Certificate for SSL client authentication  
          #ssl.certificate: "/etc/pki/client/cert.pem"  
          # Client Certificate Key  
          #ssl.key: "/etc/pki/client/cert.key"  
         # ================================= Processors =================================  
         processors:  
          - add_host_metadata:  
            when.not.contains.tags: forwarded  
          - add_cloud_metadata: ~  
          - add_docker_metadata: ~  
          - add_kubernetes_metadata: ~  
         # ================================== Logging ===================================  
         # Sets log level. The default log level is info.  
         # Available log levels are: error, warning, info, debug  
         #logging.level: debug  
         # At debug level, you can selectively enable logging only for some components.  
         # To enable all selectors, use ["*"]. Examples of other selectors are "beat",  
         # "publisher", "service".  
         #logging.selectors: ["*"]  
         # ============================= X-Pack Monitoring ==============================  
         # Filebeat can export internal metrics to a central Elasticsearch monitoring  
         # cluster. This requires xpack monitoring to be enabled in Elasticsearch. The  
         # reporting is disabled by default.  
         # Set to true to enable the monitoring reporter.  
         #monitoring.enabled: false  
         # Sets the UUID of the Elasticsearch cluster under which monitoring data for this  
         # Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch  
         # is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.  
         #monitoring.cluster_uuid:  
         # Uncomment to send the metrics to Elasticsearch. Most settings from the  
         # Elasticsearch outputs are accepted here as well.  
         # Note that the settings should point to your Elasticsearch *monitoring* cluster.  
         # Any setting that is not set is automatically inherited from the Elasticsearch  
         # output configuration, so if you have the Elasticsearch output configured such  
         # that it is pointing to your Elasticsearch monitoring cluster, you can simply  
         # uncomment the following line.  
         #monitoring.elasticsearch:  
         # ============================== Instrumentation ===============================  
         # Instrumentation support for the filebeat.  
         #instrumentation:  
           # Set to true to enable instrumentation of filebeat.  
           #enabled: false  
           # Environment in which filebeat is running on (eg: staging, production, etc.)  
           #environment: ""  
           # APM Server hosts to report instrumentation results to.  
           #hosts:  
           # - http://localhost:8200  
           # API Key for the APM Server(s).  
           # If api_key is set then secret_token will be ignored.  
           #api_key:  
           # Secret token for the APM Server(s).  
           #secret_token:  
         # ================================= Migration ==================================  
         # This allows to enable 6.7 migration aliases  
         #migration.6_to_7.enabled: true  
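    • After editing filebeat.yml, it is worth sanity-checking the configuration and output before starting the service; filebeat test is part of the standard Filebeat CLI:
      • sudo filebeat test config
      • sudo filebeat test output
      • sudo systemctl enable --now filebeat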
    • We don't need any configuration file changes for Logstash, but we will set up pipeline configurations in Logstash to read data from log files and send it to Elasticsearch.
  • Setting up Logstash Pipelines
    • A pipeline consists of three parts:
      • Input -> the source of the data.
      • Filter -> how the data is parsed and transformed, and what gets dropped.
      • Output -> where we want to send the data.
    • Download some dummy Apache logs from the web; these will be our input, or source, data.
    • Create a file apachelog.conf inside the /etc/logstash/conf.d folder.
      • You can copy logstash-sample.conf from the /etc/logstash folder, or use the sample below:
      •  input {
          file {
           path => "/home/springimplant/logstash_logs/apache.log"
           start_position => "beginning"
           sincedb_path => "/dev/null"
          }  
         }
         filter {  
          grok {  
              match => {"message" => "%{COMBINEDAPACHELOG}"}  
             }  
          date {  
              match => ["timestamp","dd/MMM/yyyy:HH:mm:ss Z"]  
              }  
         }  
         output {  
          elasticsearch {  
           hosts => ["localhost:9200"]  
           index => "javaimplant-prd-1"  
           # index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"  
           # user => "elastic"  
           # password => "elastic"  
          }  
         }
        
    • This is our pipeline configuration file for ingesting data into Elasticsearch. As discussed earlier, it has three sections: input, filter, and output.
    • Execute the pipeline using
      • sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/apachelog.conf
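    • Before (or after) running it, you can validate the pipeline syntax with Logstash's built-in config check:
      • sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/apachelog.conf --config.test_and_exit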
    • Below is another example of a Logstash pipeline configuration file, this one reading data from a CSV file.
      • Logstash collects the data from a static CSV file, which can then be analyzed in Kibana.
      •  input {  
          file {  
           path => "/home/gauravmatta/logstash_logs/data.csv"  
           start_position => "beginning"  
          }  
         }  
         filter {  
          csv {  
              columns => [  
                 "time_ref",  
                 "account",  
                 "code",  
                 "country_code",  
                 "product_type",  
                 "value",  
                 "status"  
              ]  
              separator => ","  
            }  
         }  
         output {  
          elasticsearch {  
           hosts => ["localhost:9200"]  
           action => "index"  
           index => "csv-prd-1"  
          }  
         }  
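    • Once this pipeline has run, you can confirm the documents landed in the index; a quick sketch using the _count API (plain HTTP, matching the hosts setting above):
      • curl "http://localhost:9200/csv-prd-1/_count?pretty"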
  • Create a view in Kibana
    • Log in to Kibana, click Discover, then click Create a data view and select the time filter field.
    • Click Save data view to Kibana.
    • Select the date range of your logs and you will see the relevant statistics.
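    • If you are unsure which index to pick for the data view, list the available indices first; a quick sketch using the _cat API:
      • curl "http://localhost:9200/_cat/indices?v"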

Saturday, October 21, 2023

Frameworks in Spring

  • Technologies / frameworks in Spring
    • Spring core
    • Spring MVC
    • Spring boot
    • Spring data
    • Hibernate

Thursday, October 12, 2023

Minimum web version required to use JSTL

 Q: What is the minimum web-app version required to use JSTL?

Ans: 2.4

For example, the following tag from web.xml uses web-app version 4.0:

 <web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.jcp.org/xml/ns/javaee" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_4_0.xsd" version="4.0">  
 </web-app>  

Thursday, September 28, 2023

Create a bean of type java.util.Properties

To create a bean of type java.util.Properties in Spring XML, use the following code.


 <bean id="attendkey" class="java.util.Properties" name="attendenceKeys"/>  
 <bean id="attendkey.load" class="org.springframework.beans.factory.config.MethodInvokingFactoryBean">  
   <property name="targetObject"><ref bean="attendkey"/></property>  
   <property name="targetMethod"><value>putAll</value></property>  
   <property name="arguments">  
        <props>  
             <prop key="Present">1</prop>  
             <prop key="Absent" >-1</prop>  
       </props>  
  </property>  
 </bean>  

Example :

https://github.com/gauravmatta/springmvc/blob/master/book%20management%20system/src/main/java/com/springimplant/xml/config.xml

Enable Java EE Annotations, Java 9 onwards

  • Java EE, or Java Enterprise Edition, is not available by default since Java 9.
  • To access Java EE APIs in Java 9 and later, we need to include the following library:
    • javax.annotation-api
  • Since the @PostConstruct and @PreDestroy annotations are part of Java EE, we need to use the following tag in our beans XML to enable all Java EE annotations.
    • <context:annotation-config />
  • However, in some cases we may not need to enable all Java EE annotations, and limiting what is enabled is also good programming practice. In such cases, to enable only a single annotation and its processing, we must declare the corresponding post-processor class as a bean explicitly in our XML file.
    • For example, to enable the @PostConstruct and @PreDestroy annotations, we must declare the following bean explicitly:
      • <bean class="org.springframework.context.annotation.CommonAnnotationBeanPostProcessor"/>

Saturday, August 19, 2023

Install Activiti Plugin Eclipse

Step 1: Open your IDE and click on Help » Install New Software…

Step 2: Click on the Add button and fill the details like Name: Activiti Designer (you can choose any name) Location: https://www.activiti.org/designer/update/ and then hit the OK button.

Finally, proceed with Next, Next…, if it is able to load the contents.

If the above method does not work:

Step 1: Download zip from http://www.activiti.org/designer/archived/activiti-designer-5.18.0.zip

Step 2: Open your IDE and click on Help » Install New Software…

Step 3: Click on the Add button and fill the details like Name: Activiti Designer (you can choose any name) Location: [path/to/your/downloaded/zip/file] and then hit the OK button.


Sunday, April 30, 2023

AWS services in Spring Boot

  • Assuming our Fargate container has the role to access and execute AWS services, we can use the following code to call the different services.
  • Remember, the Fargate container's role is different from the build-and-deployment roles, so the roles of tools like CodeBuild and EC2 Image Builder may differ from the Fargate container's role. What matters here is the role of the Fargate container (or whichever container) our JVM is running in and executing code from.
Q: How can we call a Step Function from a service?
  •  import com.amazonaws.services.stepfunctions.AWSStepFunctions;  
     import com.amazonaws.services.stepfunctions.AWSStepFunctionsClientBuilder;  
     import com.amazonaws.services.stepfunctions.model.StartExecutionRequest;  
     import com.amazonaws.services.stepfunctions.model.StartExecutionResult;  
     import org.json.JSONObject;  
     import org.springframework.util.StringUtils;  
  
     private AWSStepFunctions awsStepfunctions = AWSStepFunctionsClientBuilder.standard().build();  
  
     // Illustrative wrapper method; dataSetup, parsedfdate, parsedtdate,  
     // awsStateMachineArn and log come from the surrounding class.  
     public void startStateMachine(String arn) {  
          // Step Function input parameters  
          JSONObject sfnInput = new JSONObject();  
          sfnInput.put("NAME", dataSetup.getname());  
          sfnInput.put("F_DATE", parsedfdate);  
          sfnInput.put("T_DATE", parsedtdate);  
          // Fall back to the configured state machine ARN when none is supplied  
          if (!StringUtils.hasLength(arn)) {  
               arn = this.awsStateMachineArn;  
          }  
          try {  
               StartExecutionRequest executionRequest = new StartExecutionRequest().withStateMachineArn(arn)  
                         .withInput(sfnInput.toString());  
               StartExecutionResult result = this.awsStepfunctions.startExecution(executionRequest);  
               log.info("Response from Execution " + arn + "=====>" + result.toString());  
          } catch (Exception e) {  
               log.info(e.toString());  
          }  
     }  
