Right now we're giving Elasticsearch only 256MB of heap, which is quite small. It seems many real-world setups dedicate 32GB of RAM just to Elasticsearch.
As a result, from time to time we're seeing errors like this in integration tests:
java.util.concurrent.CompletionException: org.hibernate.search.util.common.SearchException: HSEARCH400588: The operation failed due to the failure of the call to the bulk REST API.
Caused by: org.hibernate.search.util.common.SearchException: HSEARCH400588: The operation failed due to the failure of the call to the bulk REST API.
Caused by: org.hibernate.search.util.common.SearchException:
HSEARCH400007: Elasticsearch request failed: HSEARCH400090: Elasticsearch response indicates a failure.
Request: POST /_bulk with parameters {refresh=true}
Response: 429 'Too Many Requests' with body
{
  "error": {
    "root_cause": [
      {
        "type": "circuit_breaking_exception",
        "reason": "[parent] Data too large, data for [\u003chttp_request\u003e] would be [263306110/251.1mb], which is larger than the limit of [255013683/243.1mb], real usage: [263305344/251.1mb], new bytes reserved: [766/766b], usages [request\u003d0/0b, fielddata\u003d0/0b, in_flight_requests\u003d766/766b, accounting\u003d13976/13.6kb]",
        "bytes_wanted": 263306110,
        "bytes_limit": 255013683,
        "durability": "PERMANENT"
      }
    ],
    "type": "circuit_breaking_exception",
    "reason": "[parent] Data too large, data for [\u003chttp_request\u003e] would be [263306110/251.1mb], which is larger than the limit of [255013683/243.1mb], real usage: [263305344/251.1mb], new bytes reserved: [766/766b], usages [request\u003d0/0b, fielddata\u003d0/0b, in_flight_requests\u003d766/766b, accounting\u003d13976/13.6kb]",
    "bytes_wanted": 263306110,
    "bytes_limit": 255013683,
    "durability": "PERMANENT"
  },
  "status": 429
}
Caused by: org.hibernate.search.util.common.SearchException: HSEARCH400090: Elasticsearch response indicates a failure.
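The 429 comes from the parent circuit breaker tripping: with a 256MB heap, the default parent limit works out to roughly 243MB, and real usage (~251MB) already exceeds it before the bulk request is even processed. The usual fix is to raise the JVM heap via the standard `ES_JAVA_OPTS` environment variable. A minimal sketch, assuming the tests start Elasticsearch via `docker run` (the container name and the 1g value are illustrative, not a tested recommendation):

```shell
# Raise the Elasticsearch heap from 256m to 1g for integration tests.
# -Xms and -Xmx should be set to the same value, per Elasticsearch guidance.
docker run -d \
  -e "ES_JAVA_OPTS=-Xms1g -Xmx1g" \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.0
```

If the tests use Testcontainers instead, the same setting can be passed with `.withEnv("ES_JAVA_OPTS", "-Xms1g -Xmx1g")` on the container definition.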