
Skip UTF8 to UTF16 conversion during document indexing #126492


Merged: 57 commits, Jun 6, 2025

Changes from 1 commit

Commits (57)
2153322
Prototype avoid UTF8 to UTF16 conversion
jordan-powers Apr 7, 2025
eaa66bb
Rename to ESBytesRef
jordan-powers Apr 8, 2025
c5d71f7
Apply spotless
jordan-powers Apr 8, 2025
a277aea
Some cleanup and comments
jordan-powers Apr 8, 2025
74822c2
Remove unnecessary throws IOException
jordan-powers Apr 8, 2025
a9ee991
Fix missing bytesValue call
jordan-powers Apr 8, 2025
ae432e1
Fix subsequent calls to parser.getText() after a call to parser.getVa…
jordan-powers Apr 8, 2025
3d1bec4
Use cached stringEnd on subsequent calls to getValueAsByteRef()
jordan-powers Apr 8, 2025
702ada2
Spotless
jordan-powers Apr 8, 2025
e2ebb92
Add textRefOrNull to DotExpandingXContentParser
jordan-powers Apr 8, 2025
4a6fe60
Add textRefOrNull() override to MultiFieldParser
jordan-powers Apr 8, 2025
240fbdb
Avoid cloning ByteSourceJsonBootstrapper
jordan-powers Apr 15, 2025
c1affcf
Rename ESBytesRef to XBytesRef
jordan-powers Apr 15, 2025
ddf7495
Merge remote-tracking branch 'upstream/main' into prototype-skip-utf16
jordan-powers Apr 15, 2025
e285349
Merge remote-tracking branch 'upstream/main' into prototype-skip-utf16
jordan-powers Apr 16, 2025
b0f3336
Add tests for ESJsonFactory
jordan-powers Apr 16, 2025
f2f106f
Add tests for ESUTF8StreamJsonParser
jordan-powers Apr 16, 2025
b0f701c
Merge remote-tracking branch 'upstream/main' into prototype-skip-utf16
jordan-powers Apr 16, 2025
20616e6
Move RawString class into separate file and rename to EncodedString
jordan-powers Apr 17, 2025
1f66ff2
Merge remote-tracking branch 'upstream/main' into prototype-skip-utf16
jordan-powers Apr 17, 2025
fd4ec6c
Merge remote-tracking branch 'upstream/main' into prototype-skip-utf16
jordan-powers Apr 18, 2025
8913ca5
Combine XBytesRef and EncodedString into XContentString
jordan-powers Apr 22, 2025
082ffeb
Merge remote-tracking branch 'upstream/main' into prototype-skip-utf16
jordan-powers Apr 22, 2025
3526556
Add missing override for new xContentText()
jordan-powers Apr 22, 2025
0ca2b60
Fix override in DotExpandingXContentParser
jordan-powers Apr 23, 2025
c12f1b1
Merge remote-tracking branch 'upstream/main' into prototype-skip-utf16
jordan-powers Apr 23, 2025
9c88362
Fix GeoPointFieldMapper geohash
jordan-powers Apr 24, 2025
0d7ff66
Merge remote-tracking branch 'upstream/main' into prototype-skip-utf16
jordan-powers Apr 24, 2025
d413cd3
Add some more tests
jordan-powers Apr 28, 2025
deefb81
Split Text and BytesReference and move base api to libs/core
jordan-powers Apr 28, 2025
681ce38
Use new BaseText and BaseBytesReference types
jordan-powers Apr 28, 2025
b31c2e0
Merge remote-tracking branch 'upstream/main' into prototype-skip-utf16
jordan-powers Apr 28, 2025
603af45
Implement TODO UTF8 to UTF16 conversion
jordan-powers Apr 28, 2025
68d6c2d
Rename xContentText to optimizedText
jordan-powers Apr 30, 2025
1364683
Merge remote-tracking branch 'upstream/main' into prototype-skip-utf16
jordan-powers Apr 30, 2025
70202da
Rename xContentText in tests too
jordan-powers Apr 30, 2025
b3f4e04
Revert "Split Text and BytesReference and move base api to libs/core"
jordan-powers May 1, 2025
a40bee3
Move Text to :libs:x-content
jordan-powers May 1, 2025
84921aa
Use Text instead of XContentString
jordan-powers May 1, 2025
dbbdbb1
Merge remote-tracking branch 'upstream/main' into prototype-skip-utf16
jordan-powers May 1, 2025
9380a0b
Fix missed reference to XContentString
jordan-powers May 1, 2025
ca03f87
Fix CI
jordan-powers May 5, 2025
180078c
Merge remote-tracking branch 'upstream/main' into prototype-skip-utf16
jordan-powers May 5, 2025
b3a0bbf
Merge remote-tracking branch 'upstream/main' into prototype-skip-utf16
jordan-powers May 8, 2025
8e36b5a
Rename test in BaseXContentTestCase to match
jordan-powers May 8, 2025
fb2394f
Update optimizedText to return XContentString interface
jordan-powers May 8, 2025
b9dc1da
Fix renamed length to stringLength
jordan-powers May 8, 2025
c38ff8a
Add OptimizedTextBenchmark
jordan-powers May 9, 2025
6c6b11e
Merge remote-tracking branch 'upstream/main' into prototype-skip-utf16
jordan-powers May 9, 2025
de33cb6
Merge remote-tracking branch 'upstream/main' into prototype-skip-utf16
jordan-powers Jun 4, 2025
d75f180
Use new UTF8Bytes class
jordan-powers Jun 4, 2025
bcb195e
Use unsigned comparison in UTF8Bytes#compareTo
jordan-powers Jun 5, 2025
4c76525
Use encoded value when recording array offsets
jordan-powers Jun 5, 2025
986d1f6
Merge remote-tracking branch 'upstream/main' into prototype-skip-utf16
jordan-powers Jun 5, 2025
d3b9496
Merge remote-tracking branch 'upstream/main' into prototype-skip-utf16
jordan-powers Jun 5, 2025
9b53320
Merge remote-tracking branch 'upstream/main' into prototype-skip-utf16
jordan-powers Jun 6, 2025
91313b5
Merge remote-tracking branch 'upstream/main' into prototype-skip-utf16
jordan-powers Jun 6, 2025
Move Text to :libs:x-content
jordan-powers committed May 1, 2025
commit a40bee33230435fa14b7cf863d27c55159ab166e
@@ -6,19 +6,14 @@
* your election, the "Elastic License 2.0", the "GNU Affero General Public
* License v3.0 only", or the "Server Side Public License, v 1".
*/
package org.elasticsearch.common.text;

import org.apache.lucene.util.BytesRef;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.xcontent.ToXContentFragment;
import org.elasticsearch.xcontent.XContentBuilder;
package org.elasticsearch.xcontent;

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

/**
* Both {@link String} and {@link BytesReference} representation of the text. Starts with one of those, and if
* Both {@link String} and {@link ByteBuffer} representation of the text. Starts with one of those, and if
* the other is requested, caches the other one in a local reference so no additional conversion will be needed.
*/
public final class Text implements Comparable<Text>, ToXContentFragment {
@@ -36,31 +31,37 @@ public static Text[] convertFromStringArray(String[] strings) {
return texts;
}

private BytesReference bytes;
private ByteBuffer bytes;
Review comment (Contributor):

I think this is a good change, but also that it might be better if this gets done in a smaller, self-contained preliminary PR.
It will reduce the complexity of this one, can be merged quickly, and it would be helpful to pinpoint and/or reduce the blast radius in case of problems

private String text;
private int hash;
private int length = -1;

public Text(BytesReference bytes) {
public Text(ByteBuffer bytes) {
this.bytes = bytes;
}

public Text(ByteBuffer bytes, int length) {
this.bytes = bytes;
this.length = length;
}

public Text(String text) {
this.text = text;
}

/**
* Whether a {@link BytesReference} view of the data is already materialized.
* Whether a {@link ByteBuffer} view of the data is already materialized.
*/
public boolean hasBytes() {
return bytes != null;
}

/**
* Returns a {@link BytesReference} view of the data.
* Returns a {@link ByteBuffer} view of the data.
*/
public BytesReference bytes() {
public ByteBuffer bytes() {
if (bytes == null) {
bytes = new BytesArray(text.getBytes(StandardCharsets.UTF_8));
bytes = StandardCharsets.UTF_8.encode(text);
}
return bytes;
}
@@ -76,7 +77,20 @@ public boolean hasString() {
* Returns a {@link String} view of the data.
*/
public String string() {
return text == null ? bytes.utf8ToString() : text;
if (text == null) {
text = StandardCharsets.UTF_8.decode(bytes).toString();
}
return text;
}

/**
* Returns the number of characters in the represented string
*/
public int length() {
if (length < 0) {
length = string().length();
}
return length;
}

@Override
@@ -115,8 +129,8 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws
} else {
// TODO: TextBytesOptimization we can use a buffer here to convert it? maybe add a
// request to jackson to support InputStream as well?
BytesRef br = this.bytes().toBytesRef();
return builder.utf8Value(br.bytes, br.offset, br.length);
assert bytes.hasArray();
return builder.utf8Value(bytes.array(), bytes.arrayOffset() + bytes.position(), bytes.remaining());
}
}
}
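As a quick illustration of the lazy-conversion contract in the Text hunk above, here is a minimal usage sketch. It is not part of the PR; it assumes the post-change org.elasticsearch.xcontent.Text shown above is on the classpath, and the string values are made up for the example.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import org.elasticsearch.xcontent.Text;

public class TextLazyConversionExample {
    public static void main(String[] args) {
        // Start from UTF-8 bytes: no UTF-16 decoding happens until string() is called.
        ByteBuffer utf8 = StandardCharsets.UTF_8.encode("fächer");
        Text fromBytes = new Text(utf8);
        assert fromBytes.hasBytes() && fromBytes.hasString() == false;

        // string() decodes once and caches the String view; length() is the character count,
        // not the UTF-8 byte count (6 characters here, 7 encoded bytes).
        String decoded = fromBytes.string();
        assert decoded.length() == fromBytes.length();

        // Start from a String: bytes() encodes once and caches the ByteBuffer view.
        Text fromString = new Text("fox");
        ByteBuffer encoded = fromString.bytes();
        // A heap buffer from Charset#encode exposes its backing array, which is what
        // toXContent relies on via array()/arrayOffset()/position()/remaining().
        assert encoded.hasArray();
    }
}
```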
@@ -36,7 +36,6 @@
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.common.util.CollectionUtils;
import org.elasticsearch.common.util.concurrent.AbstractRunnable;
import org.elasticsearch.common.util.concurrent.CountDown;
@@ -56,6 +55,7 @@
import org.elasticsearch.tasks.TaskManager;
import org.elasticsearch.threadpool.Scheduler;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.xcontent.Text;

import java.util.ArrayList;
import java.util.Collections;
@@ -13,8 +13,8 @@
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.core.TimeValue;
import org.elasticsearch.xcontent.Text;

import java.io.IOException;

@@ -23,12 +23,12 @@
import org.elasticsearch.common.collect.ImmutableOpenMap;
import org.elasticsearch.common.geo.GeoPoint;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.common.util.Maps;
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.core.CharArrays;
import org.elasticsearch.core.Nullable;
import org.elasticsearch.core.TimeValue;
import org.elasticsearch.xcontent.Text;

import java.io.EOFException;
import java.io.FilterInputStream;
@@ -391,13 +391,17 @@ public Text readOptionalText() throws IOException {
if (length == -1) {
return null;
}
return new Text(readBytesReference(length));
var byteBuffs = BytesReference.toByteBuffers(readBytesReference(length));
assert byteBuffs.length == 1;
return new Text(byteBuffs[0]);
}

public Text readText() throws IOException {
// use StringAndBytes so we can cache the string if it's ever converted to it
// use Text so we can cache the string if it's ever converted to it
int length = readInt();
return new Text(readBytesReference(length));
var byteBuffs = BytesReference.toByteBuffers(readBytesReference(length));
assert byteBuffs.length == 1;
return new Text(byteBuffs[0]);
}

@Nullable
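The readText()/readOptionalText() rewrite above relies on BytesReference.toByteBuffers(...) producing exactly one buffer for the reference returned by readBytesReference(length). A minimal sketch of that assumption (not from the PR), using BytesArray as a stand-in for an unfragmented BytesReference:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.xcontent.Text;

public class SingleBufferTextExample {
    public static void main(String[] args) {
        // An unfragmented BytesReference (BytesArray) converts to a single ByteBuffer.
        BytesReference ref = new BytesArray("quick brown fox".getBytes(StandardCharsets.UTF_8));
        ByteBuffer[] buffers = BytesReference.toByteBuffers(ref);
        assert buffers.length == 1;

        // That single buffer can back a Text directly, mirroring readText() above.
        Text text = new Text(buffers[0]);
        assert "quick brown fox".equals(text.string());
    }
}
```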
@@ -22,11 +22,11 @@
import org.elasticsearch.common.geo.GeoPoint;
import org.elasticsearch.common.io.stream.Writeable.Writer;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.common.util.ByteUtils;
import org.elasticsearch.core.CharArrays;
import org.elasticsearch.core.Nullable;
import org.elasticsearch.core.TimeValue;
import org.elasticsearch.xcontent.Text;
import org.elasticsearch.xcontent.XContentType;

import java.io.IOException;
@@ -419,7 +419,7 @@ public void writeText(Text text) throws IOException {
writeInt(spare.length());
write(spare.bytes(), 0, spare.length());
} else {
BytesReference bytes = text.bytes();
BytesReference bytes = BytesReference.fromByteBuffer(text.bytes());
writeInt(bytes.length());
bytes.writeTo(this);
}
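To see the two halves together, here is a hedged round-trip sketch: it writes a Text with the writeText(...) path above and reads it back with readText(). It is not part of the PR; BytesStreamOutput and BytesReference#streamInput() are assumed to behave as in current Elasticsearch, and the string value is arbitrary.

```java
import org.elasticsearch.common.io.stream.BytesStreamOutput;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.xcontent.Text;

public class TextStreamRoundTripExample {
    public static void main(String[] args) throws Exception {
        Text original = new Text("skip UTF-16 where we can");

        // writeText() serializes the text from either its String or its ByteBuffer view.
        try (BytesStreamOutput out = new BytesStreamOutput()) {
            out.writeText(original);

            // readText() wraps the bytes in a single ByteBuffer-backed Text,
            // so the String is only decoded if string() is actually called.
            try (StreamInput in = out.bytes().streamInput()) {
                Text roundTripped = in.readText();
                assert original.string().equals(roundTripped.string());
            }
        }
    }
}
```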
@@ -20,7 +20,6 @@
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.common.util.Maps;
import org.elasticsearch.common.xcontent.ChunkedToXContent;
import org.elasticsearch.common.xcontent.XContentHelper;
@@ -39,6 +38,7 @@
import org.elasticsearch.search.lookup.Source;
import org.elasticsearch.transport.LeakTracker;
import org.elasticsearch.transport.RemoteClusterAware;
import org.elasticsearch.xcontent.Text;
import org.elasticsearch.xcontent.ToXContentFragment;
import org.elasticsearch.xcontent.ToXContentObject;
import org.elasticsearch.xcontent.XContentBuilder;
@@ -12,10 +12,10 @@
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.core.Nullable;
import org.elasticsearch.index.shard.ShardId;
import org.elasticsearch.transport.RemoteClusterAware;
import org.elasticsearch.xcontent.Text;

import java.io.IOException;
import java.util.Objects;
@@ -21,7 +21,6 @@
import org.elasticsearch.common.CheckedSupplier;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.lucene.Lucene;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.index.mapper.IdFieldMapper;
import org.elasticsearch.index.mapper.MappedFieldType;
@@ -34,6 +33,7 @@
import org.elasticsearch.lucene.search.uhighlight.Snippet;
import org.elasticsearch.search.fetch.FetchContext;
import org.elasticsearch.search.fetch.FetchSubPhase;
import org.elasticsearch.xcontent.Text;

import java.io.IOException;
import java.text.BreakIterator;
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -23,7 +23,6 @@
import org.apache.lucene.search.vectorhighlight.SingleFragListBuilder;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.common.util.CollectionUtils;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.index.mapper.TextSearchInfo;
@@ -33,6 +32,7 @@
import org.elasticsearch.search.fetch.subphase.highlight.SearchHighlightContext.Field;
import org.elasticsearch.search.fetch.subphase.highlight.SearchHighlightContext.FieldOptions;
import org.elasticsearch.search.lookup.Source;
import org.elasticsearch.xcontent.Text;

import java.io.IOException;
import java.text.BreakIterator;
@@ -12,7 +12,7 @@
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.xcontent.Text;
import org.elasticsearch.xcontent.ToXContentFragment;
import org.elasticsearch.xcontent.XContentBuilder;

@@ -24,12 +24,12 @@
import org.apache.lucene.search.highlight.TextFragment;
import org.apache.lucene.util.BytesRefHash;
import org.elasticsearch.common.lucene.Lucene;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.index.mapper.MappedFieldType;
import org.elasticsearch.lucene.search.uhighlight.QueryMaxAnalyzedOffset;
import org.elasticsearch.search.fetch.FetchContext;
import org.elasticsearch.search.fetch.FetchSubPhase;
import org.elasticsearch.xcontent.Text;

import java.io.IOException;
import java.util.ArrayList;
@@ -19,12 +19,12 @@
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.core.Nullable;
import org.elasticsearch.index.fielddata.IndexFieldData;
import org.elasticsearch.search.DocValueFormat;
import org.elasticsearch.search.sort.SortAndFormats;
import org.elasticsearch.xcontent.ParseField;
import org.elasticsearch.xcontent.Text;
import org.elasticsearch.xcontent.ToXContentObject;
import org.elasticsearch.xcontent.XContentBuilder;
import org.elasticsearch.xcontent.XContentParser;
@@ -14,13 +14,13 @@
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.rest.action.search.RestSearchAction;
import org.elasticsearch.search.aggregations.Aggregation;
import org.elasticsearch.search.suggest.Suggest.Suggestion.Entry;
import org.elasticsearch.search.suggest.Suggest.Suggestion.Entry.Option;
import org.elasticsearch.search.suggest.completion.CompletionSuggestion;
import org.elasticsearch.xcontent.ParseField;
import org.elasticsearch.xcontent.Text;
import org.elasticsearch.xcontent.ToXContentFragment;
import org.elasticsearch.xcontent.XContentBuilder;

@@ -19,10 +19,10 @@
import org.apache.lucene.search.suggest.document.TopSuggestDocs;
import org.apache.lucene.search.suggest.document.TopSuggestDocsCollector;
import org.apache.lucene.util.CharsRefBuilder;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.index.mapper.CompletionFieldMapper;
import org.elasticsearch.search.suggest.Suggest;
import org.elasticsearch.search.suggest.Suggester;
import org.elasticsearch.xcontent.Text;

import java.io.IOException;
import java.util.Collections;
@@ -14,12 +14,12 @@
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.lucene.Lucene;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.common.util.Maps;
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.suggest.Suggest;
import org.elasticsearch.xcontent.ParseField;
import org.elasticsearch.xcontent.Text;
import org.elasticsearch.xcontent.XContentBuilder;

import java.io.IOException;
@@ -19,7 +19,6 @@
import org.apache.lucene.util.BytesRefBuilder;
import org.apache.lucene.util.CharsRefBuilder;
import org.elasticsearch.common.lucene.Lucene;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.index.query.AbstractQueryBuilder;
import org.elasticsearch.index.query.ParsedQuery;
import org.elasticsearch.index.query.QueryBuilder;
@@ -31,6 +30,7 @@
import org.elasticsearch.search.suggest.Suggester;
import org.elasticsearch.search.suggest.SuggestionSearchContext.SuggestionContext;
import org.elasticsearch.search.suggest.phrase.NoisyChannelSpellChecker.Result;
import org.elasticsearch.xcontent.Text;
import org.elasticsearch.xcontent.XContentFactory;
import org.elasticsearch.xcontent.XContentParser;

@@ -11,9 +11,9 @@

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.search.suggest.Suggest;
import org.elasticsearch.search.suggest.Suggest.Suggestion;
import org.elasticsearch.xcontent.Text;

import java.io.IOException;
import java.util.Objects;
@@ -16,13 +16,13 @@
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.BytesRefBuilder;
import org.apache.lucene.util.CharsRefBuilder;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.search.suggest.Suggester;
import org.elasticsearch.search.suggest.SuggestionSearchContext.SuggestionContext;
import org.elasticsearch.search.suggest.phrase.DirectCandidateGenerator;
import org.elasticsearch.xcontent.Text;

import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

Expand All @@ -47,7 +47,8 @@ public TermSuggestion innerExecute(String name, TermSuggestionContext suggestion
indexReader,
suggestion.getDirectSpellCheckerSettings().suggestMode()
);
Text key = new Text(new BytesArray(token.term.bytes()));
var termBytes = token.term.bytes();
Text key = new Text(ByteBuffer.wrap(termBytes.bytes, termBytes.offset, termBytes.length));
TermSuggestion.Entry resultEntry = new TermSuggestion.Entry(key, token.startOffset, token.endOffset - token.startOffset);
for (SuggestWord suggestWord : suggestedWords) {
Text word = new Text(suggestWord.string);
@@ -96,7 +97,8 @@ protected TermSuggestion emptySuggestion(String name, TermSuggestionContext sugg
TermSuggestion termSuggestion = new TermSuggestion(name, suggestion.getSize(), suggestion.getDirectSpellCheckerSettings().sort());
List<Token> tokens = queryTerms(suggestion, spare);
for (Token token : tokens) {
Text key = new Text(new BytesArray(token.term.bytes()));
var termBytes = token.term.bytes();
Text key = new Text(ByteBuffer.wrap(termBytes.bytes, termBytes.offset, termBytes.length));
TermSuggestion.Entry resultEntry = new TermSuggestion.Entry(key, token.startOffset, token.endOffset - token.startOffset);
termSuggestion.addTerm(resultEntry);
}
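The TermSuggester change above wraps a Lucene BytesRef term without copying. A small sketch (not from the PR) of why the offset and length matter when doing so: a BytesRef is often a slice of a larger backing array, and ByteBuffer.wrap(bytes, offset, length) keeps only that slice in the buffer's visible window.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import org.apache.lucene.util.BytesRef;
import org.elasticsearch.xcontent.Text;

public class BytesRefWrapExample {
    public static void main(String[] args) {
        // A BytesRef that is a slice of a larger array (offset 6, length 3 -> "fox").
        byte[] backing = "quick fox jumps".getBytes(StandardCharsets.UTF_8);
        BytesRef term = new BytesRef(backing, 6, 3);

        // Wrapping with offset/length (as innerExecute does) avoids copying and keeps
        // only the term's bytes between the buffer's position and limit.
        ByteBuffer wrapped = ByteBuffer.wrap(term.bytes, term.offset, term.length);
        Text key = new Text(wrapped);
        assert "fox".equals(key.string());
    }
}
```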
@@ -10,12 +10,12 @@

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.search.suggest.SortBy;
import org.elasticsearch.search.suggest.Suggest;
import org.elasticsearch.search.suggest.Suggest.Suggestion;
import org.elasticsearch.search.suggest.Suggest.Suggestion.Entry.Option;
import org.elasticsearch.xcontent.ParseField;
import org.elasticsearch.xcontent.Text;
import org.elasticsearch.xcontent.XContentBuilder;

import java.io.IOException;