In Conversation with ELK Stack | Part 1

Question: How did Elasticsearch start?

Answer: Let’s start with the Elastic Stack, the components within it, and how they fit together. Elasticsearch is just one piece of this system. It started off as essentially a scalable version of the Lucene open-source search framework: it added the ability to horizontally scale Lucene indices, which is where Elasticsearch shards come in.

Question: What is a shard in Elasticsearch?

Answer: Each shard in Elasticsearch is a single Lucene inverted index of documents, so every shard is an actual Lucene instance of its own. However, Elasticsearch has evolved to be much more than just Lucene spread out across a cluster.

Question: What can Elasticsearch be used for nowadays?

Answer:

  • It can be used for much more than full-text search now.
  • It can actually handle structured data very well.
  • It can aggregate data very quickly.

So, it’s not just for search: you can handle structured data of any type, and you’ll see it’s often used for things like aggregating logs.

Question: How does Elasticsearch compare to other technologies?

Answer: The really cool thing about Elasticsearch is that it’s often a much faster solution than things like Hadoop, Spark, or Flink. New capabilities are being built into Elasticsearch all the time, such as graph visualisation and machine learning, which make Elasticsearch a competitor for things like Hadoop, Spark, and Flink, only it can give you an answer in milliseconds instead of hours. So, for the right sorts of use cases, Elasticsearch can be a very powerful tool, and not just for search.

ElasticSearch fundamentals

Question: What is Elasticsearch at a very low level?

Answer: At a low level, it’s really just about handling JSON requests. We’re not talking about pretty UIs or graphical interfaces; when we’re talking about Elasticsearch itself, we’re talking about a server that can process JSON requests and give you back JSON data, and it’s up to you to do something useful with that. For example, we can use curl to issue a request with a GET verb against a given index called “tags”, searching for everything in it. The results come back in JSON format, and it’s up to you to parse them. Say we get one result back for the movie “Swimming to Cambodia”, with a given user ID and a tag of “Cambodia”: if that’s part of a tags index we’re searching, that’s what a result might look like. Just to make it real, that’s the sort of output you can expect from Elasticsearch itself.
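To make that concrete, here is a minimal Python sketch of parsing such a JSON response. The exact field names (`userId`, `movieTitle`, `tag`) and the user ID value are assumptions for illustration; real Elasticsearch responses carry more metadata, but the overall shape is similar:

```python
import json

# A response from GET http://127.0.0.1:9200/tags/_search might look
# roughly like this (field names and values are illustrative):
raw_response = """
{
  "hits": {
    "total": {"value": 1},
    "hits": [
      {
        "_index": "tags",
        "_source": {"userId": 107, "movieTitle": "Swimming to Cambodia", "tag": "Cambodia"}
      }
    ]
  }
}
"""

# Elasticsearch hands you back JSON; it is up to you to parse it.
response = json.loads(raw_response)
for hit in response["hits"]["hits"]:
    doc = hit["_source"]
    print(doc["movieTitle"], "->", doc["tag"])
```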

First Querying with ES

Question: Do we have some sort of user interface that can ease our visualisation?

Answer: There’s also Kibana, which sits on top of Elasticsearch and gives you a pretty web UI. So if you’re not building your own application or web application on top of Elasticsearch, Kibana can be used just for searching and visualising what’s in your search index graphically.

Kibana Introduction

Question: What operations can we use Kibana for?

Answer:

  • It can do very complex aggregations of data.
  • It can graph your data and create charts.
  • It’s often used for things like log analysis: if you’re familiar with things like Google Analytics, the combination of Elasticsearch and Kibana can be used as a way to roll your own Google Analytics at a very large scale. Here’s an actual screenshot from Kibana looking at some real log data.
  • We can also visualise things like: where the hits on my website are coming from, where the error response codes are and how they break down, what my distribution of URLs is, whatever you can dream up.

Complex data analysis with Kibana

Question: OK, what are FileBeat and Logstash usually used for?

Answer: There are also Logstash and the Beats framework, which are ways of publishing data into Elasticsearch in real time, in a streaming format. If you have, for example, a collection of web server logs coming in that you want fed into your search index automatically over time, FileBeat can sit on your web servers, look for new log files, parse them out, structure them the way Elasticsearch wants, and feed them into your Elasticsearch cluster as they come in.

Question: Can we use Logstash directly to publish data into ES?

Answer: Logstash does much the same thing: it can also be used to push data around between your servers and Elasticsearch, but often it’s used as an intermediate step. A very lightweight FileBeat client sits on your web servers, and Logstash accepts those events, collecting and pooling them up for feeding into Elasticsearch over time.

Question: What is X-Pack?

Answer: Finally, another piece of the Elastic Stack is called X-Pack. This is a paid add-on offered by elastic.co, and it offers features like security, alerting, monitoring, and reporting. It also contains some of the more advanced features that are just starting to make it into Elasticsearch, such as machine learning and graph exploration. So you can see that with X-Pack, Elasticsearch starts to become a real competitor to much more complex and heavyweight systems like Flink and Spark. But that’s another piece of the Elastic Stack when we talk about this larger ecosystem.

Question: What are the free parts of X-Pack?

Answer: There are free parts of X-Pack, like:

  • The monitoring framework, which lets you quickly visualise what’s going on with your cluster.
  • What’s my CPU utilisation and system load?
  • How much memory do I have available?

So, when things start to go wrong with your cluster, this is a very useful tool to have for understanding its health. That’s the Elastic Stack at a high level. Obviously Elasticsearch can still be used for powering search on a website like Wikipedia, but with these components it can be used for so much more.

Question: What does the usual anatomy of a request to ES look like?

Question: What requests can be used to interact with ES?

Question: What are the guiding principles of REST?

Question: What’s the most usual way of querying ES?

Question: OK, show me a sample querying operation in ES.

  • We’re saying curl -H “Content-Type: application/json”: that sends an HTTP header saying the data in the body is going to be in JSON format.
  • -XGET means we’re using the GET method (or the GET verb, depending on your terminology), meaning we just want to retrieve information back from Elasticsearch; we’re not asking it to change anything.
  • The URL includes the host we’re talking to, in this case 127.0.0.1, which is the local loopback address for your local host (Elasticsearch runs on port 9200 by default), followed by the index name, which is shakespeare, and then _search, meaning we want to process a search query as part of this request.
  • The ?pretty query string parameter means we want the results back in a nicely formatted, human-readable form, because we’re going to be looking at them on the command line.
  • Finally, we have the request body itself, specified after -d and between single quotes. If you’ve never seen JSON before, this is what it looks like: a structured data format where each level is contained within curly brackets, always starting with curly brackets at the top level.
  • Then we have a query level, and within those brackets a match_phrase command that matches the text entry “to be or not to be”.

So that’s how you would construct a real search query in Elasticsearch, using nothing but an HTTP request.
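Putting those pieces together, the request described above can be sketched in Python as data rather than a raw curl string. The field name `text_entry` is assumed from the “text entry” mentioned above:

```python
import json

# The pieces of the request described above, expressed as data.
host = "127.0.0.1"
port = 9200
index = "shakespeare"
url = f"http://{host}:{port}/{index}/_search?pretty"

# The -d body: a query containing a match_phrase clause.
body = {
    "query": {
        "match_phrase": {
            "text_entry": "to be or not to be"
        }
    }
}

# This JSON string is what curl sends along with
# -H "Content-Type: application/json".
payload = json.dumps(body)
print("GET", url)
print(payload)
```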

Question: OK, show me a sample query to ingest data into ES.

  • In this one we’re using a PUT verb, again to 127.0.0.1 on port 9200.
  • This time we’re talking to an index called movies and a data type called movie, using a unique identifier for this new entry: 109487.
  • Under movie ID 109487 we include the following information in the message body. The genre is actually a list of genres; in JSON that’s a comma-delimited list enclosed in square brackets, so this particular movie is in both the IMAX and sci-fi categories, its title is Interstellar, and it came out in the year 2014. So that’s what some real HTTP requests look like when you’re dealing with Elasticsearch.
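That PUT request can be sketched in Python as data (genre strings are as described above; note that in Elasticsearch 7+ mapping types were removed, so the path would typically be /movies/_doc/109487 instead):

```python
import json

# The PUT request described above, expressed as data. The path encodes the
# index ("movies"), the type ("movie"), and the document ID (109487).
url = "http://127.0.0.1:9200/movies/movie/109487"

document = {
    "genre": ["IMAX", "Sci-Fi"],  # a JSON list: comma-delimited, in square brackets
    "title": "Interstellar",
    "year": 2014,
}

print("PUT", url)
print(json.dumps(document))
```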

Question: What is a Document in Elasticsearch terminology?

Question: What is an Index in Elasticsearch terminology?

Question: What actually is an Inverted Index in Elasticsearch terminology?

From the above example, the inverted index would match the word “final”, as a search term, to document one.

Note: It’s a little more complicated than that in practice; in reality, the index stores not only which documents a term appears in, but also the position within the document where it appears. But at a high conceptual level, this is the basic idea: an inverted index is what you’re actually getting with a search index, mapping the things you search for to the documents that contain them.
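Here is a toy Python sketch of the idea, with made-up documents. It maps each term to the (document, position) pairs where it occurs, matching the note above about positions being stored too:

```python
# A toy inverted index: map each term to the (document ID, position)
# pairs where it occurs. The documents here are made up.
docs = {
    1: "the final frontier",
    2: "the final countdown",
    3: "space the final frontier",
}

inverted = {}
for doc_id, text in docs.items():
    for position, term in enumerate(text.split()):
        inverted.setdefault(term, []).append((doc_id, position))

# Searching for "final" returns every (document, position) pair it occurs in.
print(inverted["final"])  # -> [(1, 1), (2, 1), (3, 2)]
```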

Question: How do I actually deal with the concept of relevance?

Enhanced Question: Let’s take, for example, the word “the”. How do I deal with that? The word “the” is going to be a very common word in every single document, so how do I make sure that only documents where “the” is a special word are the ones I get back?

Answer: That’s where TF-IDF comes in. It stands for Term Frequency times Inverse Document Frequency. It’s a fancy-sounding term, but it’s actually a very simple concept.

  • Term frequency is just how often a given search term appears within a given document. So if the word “space” occurs very frequently in a given document, it has a high term frequency.
  • Document frequency is how often a term appears across all of the documents in your entire index. The word “space” probably doesn’t occur very often across the entire index, so it has a low document frequency. However, the word “the” appears in pretty much all documents, so it has a very high document frequency.
  • If we divide term frequency by document frequency, mathematically we get a measure of relevance: how special this term is to the document. It measures not only how often the term occurs within the document, but how that compares to how often it occurs in documents across the entire index.

So with that example, the word “space” in an article about space would rank very highly. However, the word “the” wouldn’t rank highly at all, since it’s a common term found in every other document as well.
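A minimal Python sketch of that arithmetic, on a made-up three-document corpus (this uses the common log-based IDF; scoring formulas vary in practice):

```python
import math

# Toy corpus: "space" is rare across documents, "the" is everywhere.
docs = [
    "the space shuttle orbits in space",
    "the cat sat on the mat",
    "the dog chased the cat",
]

def tf_idf(term, doc_text, corpus):
    words = doc_text.split()
    tf = words.count(term) / len(words)               # term frequency
    df = sum(1 for d in corpus if term in d.split())  # document frequency
    idf = math.log(len(corpus) / df)                  # inverse document frequency
    return tf * idf

# "space" is special to document 0 and scores highly; "the" appears in
# every document, so its IDF (and hence its score) is zero.
print(tf_idf("space", docs[0], docs))
print(tf_idf("the", docs[0], docs))    # -> 0.0
```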

Question: How do search engines work?

Answer: That’s the basic idea of how search engines work: if you’re searching for a given term, the engine tries to give you back results in order of their relevance. Relevance is loosely based, at least in part, on the concept of TF-IDF. It’s not really that complicated.

Question: How does Elasticsearch 7 differ from previous versions?

Question: How does Elasticsearch actually scale?

  • Elasticsearch’s main scaling trick is that an index is split into what we call shards, and every shard is basically a self-contained instance of Lucene.
  • The idea is that if you have a cluster of computers, you can spread those shards out across multiple different machines. As you need more capacity, you can just throw more machines into your cluster and add more shards to the entire index, so it can spread the load out more efficiently.
  • So that’s the basic idea: we distribute our index among many different shards, and different shards can live on different computers within your cluster.

Question: How does Elasticsearch route our search query to the appropriate shard?

Answer: The way it works is that once you talk to any given server in your Elasticsearch cluster, it figures out which document you’re actually interested in and hashes that to a particular shard ID. There’s a mathematical function that can very quickly figure out which shard owns a given document, so the node can redirect you to the appropriate shard in your cluster very quickly.
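The routing idea can be sketched in Python. Note this is only an illustration of the “hash the document ID, then take it modulo the number of primary shards” scheme: real Elasticsearch uses a murmur3 hash of the routing value (the document ID by default), while this sketch substitutes MD5 for simplicity:

```python
import hashlib

# Sketch of shard routing: hash the document ID deterministically,
# then take it modulo the number of primary shards.
def route_to_shard(doc_id: str, num_primary_shards: int) -> int:
    digest = hashlib.md5(doc_id.encode()).hexdigest()
    return int(digest, 16) % num_primary_shards

# Every node can run the same math, so any node can redirect a
# request to the shard that owns the document.
print(route_to_shard("109487", 3))
```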

Question: Can Elasticsearch provide resiliency?

Enhanced Question: One big problem you have with a cluster of computers is that those computers can fail sometimes, and you need to deal with that. How can this be done?

Answer: One industry-adopted way of building resiliency with ES is to have an index with two primary shards and two replicas. In this example, we have three nodes, where a node is basically one installation of Elasticsearch; usually you’ll see one node installed per physical server in your cluster. The intention is that if any given node in your cluster goes down, you won’t even see it as an end user, because the failure can be handled. So let’s take a look at what’s going on here.

Question: What is a primary shard?

Answer: In this example, we have two primary shards. That means those are the primary data holders, and that’s where write requests are routed initially. That data is then replicated to the replica shards, which can also handle read requests whenever we want. ES manages this redundancy for you under the hood.

Question: How does ES handle failover here?

Answer: Let’s say node one were to fail for some reason: a disk failure, a burned-out power supply, or something like that. In this case, we’re going to lose primary shard 1 and replica shard 0, but it’s not a big deal because:

  • We have one replica of shard one sitting on node two.
  • Another replica of shard one is sitting on node three.

Question: So what would happen if node one just suddenly went away?

Answer: Elasticsearch would figure that out and elect one of the replica shards on node two or three to be the new primary. Since we have those replicas sitting there, everything will be fine: we can keep accepting new data and keep servicing read requests, because we’re now down to one primary and one replica, and that should get us by until we can restore the capacity we lost with node number one.

Question: Similarly, what would happen if node number three goes away?

Answer: We lose our primary shard zero, but it’s okay because:

  • We had a replica of primary shard 0 sitting on node one.
  • We also had a replica of primary shard 0 sitting on node two.

Elasticsearch can just promote one of those replicas to be the new primary, and it can get by until we restore the capacity we lost. In this way, we have built a very fault-tolerant system.

Question: How much failure can this setup tolerate? In other words, how many nodes can go away while we still operate well?

Answer: We could lose multiple nodes. In fact, we could even tolerate node one and node two going away at the same time, in which case we’d be left with a primary on node three for both of the shards we care about. So it’s pretty clever how it all works.

Question: What’s the advisable number of nodes to have in a usual cluster?

Answer: It’s a good idea to have an odd number of nodes for the sort of resiliency we’re talking about.

Question: What approach should we take for interacting with an ES cluster?

Answer: As an application, we should just round-robin our requests among all the different nodes in the cluster. That spreads out the load of the initial traffic. Maybe our application manages distributing those requests across different nodes, or maybe we have some sort of load-balancer device that does it for us.

Question: What happens when you write new data to, or read data from, your cluster?

Answer:

  • Case of writing to ES: Let’s say you’re indexing a new document into Elasticsearch; that’s going to be a write request. When you do that, whatever node you talk to will work out where the primary shard for that document lives and redirect you there. So you write that data, it gets indexed into the primary shard on whichever node it lives on, and then it automatically gets replicated to any replicas of that shard.
  • Case of reading from ES: Reads are a little quicker; they’re just routed to the primary shard or to any replica of that shard. That spreads out the load of reads even more efficiently: the more replicas you have, the more you increase the read capacity of the entire cluster.

Question: Is having more replicas (as we saw above, two replica shards in addition to each primary shard) a good approach industry-wide?

Answer: Yes, that’s absolutely advisable, because a lot of Elasticsearch applications are very read-heavy. If you’re powering search on a big website like Wikipedia, you’re going to get far more read requests from the world than index requests for new documents, so it’s not as bad as it sounds. For a lot of applications, you can often just add more replicas to your cluster later on to add more read capacity.

Question: Can we modify the number of primary shards later on?

Answer: Unfortunately, you cannot change the number of primary shards for an index later on; you need to define that up front, when you create the index.

Question: How do we increase our overall write capacity?

Answer: If you do need more write capacity, you can always re-index your data into a new index and copy it over. But you want to plan ahead and make sure you have enough primary shards up front to handle any growth you might reasonably expect in the near future. Here, we’re defining the number of shards: in this example, we’re saying we want three primary shards and one replica.
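As a sketch, the copy-over step can use Elasticsearch’s `_reindex` API, whose request body names a source and a destination index. The index names here are hypothetical, and the destination index is assumed to have been created beforehand with more primary shards:

```python
import json

# Since the primary shard count is fixed at index creation, adding write
# capacity means creating a new index with more primaries and copying the
# data over. The _reindex API takes a body like this:
reindex_request = {
    "source": {"index": "movies"},
    "dest": {"index": "movies_v2"},  # pre-created with more primary shards
}

print("POST http://127.0.0.1:9200/_reindex")
print(json.dumps(reindex_request))
```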

Question: What does the syntax for creating an index look like?

Answer: Here, by the way, is what that syntax would look like as a REST request. We specify a PUT verb on the request with the index name, followed by a settings structure in JSON that defines the number of primary shards and the number of replicas.
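A sketch of that settings body in Python. The index name `testindex` is hypothetical; `number_of_shards` and `number_of_replicas` are the actual Elasticsearch setting names:

```python
import json

# The settings body for creating an index with three primary shards
# and one replica, as described above.
create_index_request = {
    "settings": {
        "number_of_shards": 3,
        "number_of_replicas": 1,
    }
}

print("PUT http://127.0.0.1:9200/testindex")
print(json.dumps(create_index_request))
```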

Question: How many shards do we actually end up with in the request shown above?

Answer: The answer is actually six.

  • We’re saying we want three primary shards and one replica of each of those primary shards, so you can see how that adds up: three primaries, times one replica per primary, gives three replicas, plus the three original primaries, which makes six.
  • If we instead asked for two replicas, we would end up with nine total shards: three primaries, and a total of six replicas, giving two replica shards for each primary shard. So that’s how the maths works out. It can be a little confusing sometimes, but that’s the idea.
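The arithmetic above as a small Python helper:

```python
# Total shards = primaries * (1 + replicas): the replica setting means
# one extra copy of every primary shard.
def total_shards(primaries: int, replicas: int) -> int:
    return primaries * (1 + replicas)

print(total_shards(3, 1))  # the example above: 6 shards total
print(total_shards(3, 2))  # with two replicas: 9 shards total
```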

That’s the general idea of how Elasticsearch scales and how its architecture works. The important concepts here are primary and replica shards, and how Elasticsearch automatically distributes those shards across different nodes, living on different servers in your cluster, to provide resiliency against the failure of any given node.

If you’ve read through till here, go ahead and clap, please. We’ll see you in the next part of the series.

aditya goel, Software Engineer for Big Data distributed systems