using SSD persistent disk. Instance 2 took a bit longer and finished in 15 minutes. The total output, including UIDs, was 1.3GB.
Note that `stw_ram_mb` is based on the memory usage as seen by the Go runtime. It does not currently account for memory allocated by RocksDB, so the actual usage is higher.
### Server
Now that the data is loaded, you can run the Dgraph servers. To serve the 3 shards above, you can follow the [same steps as here](#multiple-distributed-instances).
Now you can run GraphQL queries over freebase film data like so:
...
...
The support for GraphQL is [very limited right now](https://github.com/dgraph-io).
You can conveniently browse [Freebase film schema here](http://www.freebase.com/film/film?schema=&lang=en).
There are also some schema pointers in the [README](https://github.com/dgraph-io/benchmarks/blob/master/data/README.md).
#### Query Performance
With the [data loaded above](#loading-performance) on the same hardware,
the fairly complicated query above took **218ms to run** the first time after the server started.
Note that the JSON conversion step has slightly more overhead than what is captured here.
```json
{
"server_latency":{
"json":"37.864027ms",
"parsing":"1.141712ms",
"processing":"163.136465ms",
"total":"202.144938ms"
}
}
```
Consecutive runs of the same query took much less time (80 to 100ms), because the posting lists were already in memory.
```json
{
"server_latency":{
"json":"38.3306ms",
"parsing":"506.708µs",
"processing":"32.239213ms",
"total":"71.079022ms"
}
}
```
## Queries and Mutations
You can see a list of [sample queries here](https://discuss.dgraph.io/t/list-of-test-queries/22).
Dgraph also supports mutations via GraphQL syntax.
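As a sketch of the syntax, mutations at this stage of Dgraph were expressed as a `set` block of RDF N-Quad triples inside GraphQL-style braces; the `<alice>`/`<bob>` nodes and predicates below are made up for illustration:

```
mutation {
  set {
    <alice> <name> "Alice" .
    <alice> <follows> <bob> .
  }
}
```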