In both cases, we are looking to see what changes occurred in the search results. When making a ranking change, there is usually a tradeoff between improved queries and some unintended losses, so we can use api-diff to provide a more quantitative answer as to whether or not the improvements outweigh the losses enough to proceed with the change. When updating the data in our index, we are usually expecting a steady increase in quality and coverage, and are mostly on the lookout for surprising regressions, often caused by changes in data formats.

We run api-diff with a command that looks something like this:

```
api-diff \
  `# these can be any accessible http server, sometimes we run against our prod index` \
  --old.host ... \
  --new.host ... \
  `# csv input, use headings as query parameter keys` \
  --input_csv ~/RadarCode/geocode-acceptance-tests/input/addresses.csv \
  `# remap csv column "query" to query param "text"` \
  --key_map query=text \
  `# extra options to append to every query` \
  --extra_params ... \
  `# ignore all fields named these things in computing our diff` \
  --ignored_fields bbox geometry attribution timestamp \
  `# trim our results down to just the first entry under the addresses key` \
  ...
```

If your servers require an API key, you'll want to create a config file for it.

api-diff can be installed as a command-line tool with npm install -g. It can output a colorized console diff.

Our workflow is to use the interactive evaluation capabilities of api-diff's HTML output mode. We manually evaluate each change in a browser using keyboard shortcuts. When we're done, we examine the number of wins and losses to make an informed decision on whether or not to go ahead with the change.
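To make the input concrete, here is the shape of CSV the command above consumes. Per the flags above, column headings become query parameters, with the query column remapped to the text param by the key_map option; the addresses below are made-up illustrations, not rows from our actual acceptance tests:

```
query
"841 Broadway, New York, NY"
"20 W 34th St, New York, NY"
"1600 Pennsylvania Ave NW, Washington, DC"
```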
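For the API key setup, the exact config format is described in the api-diff README. As a rough sketch under assumed names (the file name and every field here are illustrative, not the tool's documented schema), a config file could map each server to its key along these lines:

```
# hypothetical config file; the schema shown is an assumption, see the api-diff README
cat > api-diff-config.json <<'EOF'
{
  "old": { "host": "old.search.example.com", "apiKey": "OLD_SERVER_KEY" },
  "new": { "host": "new.search.example.com", "apiKey": "NEW_SERVER_KEY" }
}
EOF
```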
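Finally, a minimal install-and-run sketch, assuming the tool is published on npm as @radarlabs/api-diff and exposes its HTML report via an --output_mode flag (both assumptions worth verifying against the README and api-diff --help):

```
# package name is an assumption; verify on npm before installing
npm install -g @radarlabs/api-diff

# the default console mode prints a colorized diff;
# --output_mode is an assumption for generating the HTML report reviewed in a browser
api-diff --output_mode html ... > diff.html
```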