Anticipatory Design: Improving Search UX using Query Analysis and Machine Cues

This paper was refereed by Weave's peer reviewers.


This article looks at how inferred and contextual aspects of a search query can offer new ways of thinking about the search results page. Mining for user context requires techniques that interpret the intentions behind search queries and the physical and network locations of our users. By applying the residual machine cues inherent to the search act and using semantic query analysis, we can improve the user experience by anticipating user needs and introducing personal context. This is anticipatory design in practice. In this article, I define the components of anticipatory design, consider the privacy implications of anticipatory systems, examine how our search interfaces, as a primary interaction model, lend themselves to anticipatory design, and look at how inferred and contextual cues can be brought into a search prototype to improve the search user experience.


Our evolution into a searching culture requires a fundamental shift in how we think about the user experience, well beyond simply accommodating search engine optimization.
–Vanessa Fox, Former Product Manager, Google Search Console (Fox, 2012)

When Larry Page and Sergey Brin engineered the “backrub” algorithm and the concept of an index of web pages (Page et al., 1999), they couldn’t have known how the act of searching would become the dominant model for interacting with the web. In the short time since the search engine’s invention, we have become a searching culture. People search to check facts, learn concepts, get directions, find other people, decide on what to eat, learn how to cook what they want to eat, etc. This interaction pattern is a primary means of navigation for the twenty-first century.

For libraries and cultural institutions, our evolution into a searching culture brings with it a number of opportunities, including rethinking the user experience of search. To date, library discovery systems have focused on search and retrieval around explicit queries and matching string patterns. They can and have matched on these explicit strings to much success. Take for example a typical search interaction: a user searches for “Yellowstone National Park” and a set of results is returned based on that string pattern match coupled with relevancy and occurrences of that pattern of words as measured against the complete search index. For every query, matches are chosen by comparing items in the index to the defined, literal words of the query. We can keep doing this in library search as we have for the past few decades. But, there is another possibility here if we turn toward a more nuanced, semantic interpretation of search queries.[1] Speech acts are made up not only of the words being used—the message—but also the context. Search interactions can also be viewed this way, where the message is the search query, but the context for the search is largely ignored. Consider this simple formula and example of the search act.

search query = literal aspect + inferred aspect

“yellowstone national park” [search query] = “yellowstone national park” [literal aspect] + iPhone user, on MT highway [tacit, inferred aspect]

I believe that the most interesting work involving the future of search is going to revolve around figuring out how to use the inferred aspects of searching, not just the literal search, to enhance and improve the search act for users.[2]

Within libraries and cultural heritage institutions, our interest in “being like Google” has focused more on the unified set of results that Google queries allow, but search and search result pages are evolving into something else entirely. Search is a calorie counter. Search is a person directory. It is the home page of the web. Once we start to unpack the contextual bits and pieces around a search act or a query, there are all kinds of possibilities for enhancing discovery and our search user interfaces.

Figure 1. Google user interface inviting natural language question, “Calories in cranberry sauce?”
Figure 2. Google search engine results page showing images, contact information, latest news, Wikipedia data, etc.

Studies are starting to confirm this idea of search as a student’s or a researcher’s home page. A 2010 study of higher education website usage reported the findings of an environmental scan showing that:

"Search engines continue to dominate, topping the list of electronic sources most used to find online content (93 percent), followed closely by Wikipedia (88 percent). The key difference in usage between search engines and Wikipedia is the frequency—75 percent of students who use search engines do so daily, compared to 20 percent of those who use Wikipedia." (OCLC, 2011, p. 52)[3]

In a more recent study from Ithaka S+R, Roger Schonfeld (2015) writes about how researchers’ expectations are being set by consumer internet services and that our library discovery services account for a “relatively minor share of search-driven discovery” and do not provide “even a substantial minority of content accesses to major content platforms.” In contrast, he writes “...Google and Google Scholar are comparatively important discovery starting points. Each provides not only search but also various types of anticipatory discovery, including reading recommendations and keyword, author, and citation alerts” (Schonfeld, 2015). By focusing on the contextual bits and pieces around a search act, we might improve our users’ experience of the library website. Khodabandelou, Hug, Deneckere, and Salinesi (2013) coined the term “intention [data] mining” for the goal of interpreting these contextual bits and pieces of a search query. Intention mining is a useful frame for the activity we will conduct as we work through how to apply anticipatory design to our search interfaces.[4]

Given our searching culture and expectations, I will look at a series of research questions and possibilities for library search interactions in this article. First, I will establish the history of anticipatory design and how it has and will shape our library search interfaces. Second, I’ll take a practical look at an implementation of anticipatory design principles that use intention mining in a generic, library search prototype. And finally, I will look at the implications of this anticipatory model in relation to privacy and utility in hopes of understanding where we can introduce value to library search interactions and how tolerant our users are in accepting enhancements that depend on inference and knowledge of their contextual patterns and behavior.

Literature Review

Since search has become the primary interaction pattern on the web, understanding what a typical search act entails is the first step in thinking about how to improve search design. In terms of the search act, we have a system that allows a query, interprets the query, and then returns [potential] answers. Within this systems model, one can imagine a role where anticipating the needs of the searcher is possible. But, before integrating anticipatory design into this model, we need to know exactly what anticipatory design is. In coining the term, Aaron Shapiro (2015) calls anticipatory design “...design that is one step ahead of you.” Shapiro goes on to offer a more detailed definition:

“Anticipatory design is fundamentally different: decisions are made and executed on behalf of the user. The goal is not to help the user make a decision, but to create an ecosystem where a decision is never made—it happens automatically and without user input. The design goal becomes one where we eliminate as many steps as possible and find ways to use data, prior behaviors, and business logic to have things happen automatically, or as close to automatic as we can get” (Shapiro, 2015).

Shapiro coined the term for the web age, but there are previous studies in the computer science, philosophy, information science, and interaction design literature which discuss anticipation and the prediction of future actions as essential and inherent components of systems design. Loet Leydesdorff (2004) emphasizes that: “In order to generate and process meaning, a communication system has to entertain a model of itself. A system which contains a model of itself can function in an anticipatory mode.” Leydesdorff’s formulation is a later version of what Robert Rosen, the natural scientist and pioneer systems design theorist, introduced in his Anticipatory Systems noting that: “anticipatory behavior is one in which a change of state in the present occurs as a function of some predicted future state and that the agency through which the prediction is made must be, in the broadest sense, a model” (1985, p. 8). At any given moment, a systems model built with anticipatory design principles is speculating about user needs and attempting to fill in the blanks correctly.

Given these ideas of speculation and prediction, anticipatory design is not without its critics. Many have pointed to how the model could lead to less empathy for the mental models of our users. A number of researchers (Dubois, 1998; Jansen, et al., 2008; Kruschwitz et al., 2013; Zamenopoulos & Alexiou, 2007) note that when we enter into modes of prediction, we introduce constraints to understanding. And these constraints have led some (Busch, 2015; Sene, 2013) to call anticipatory design practice “presumptive design.” Others have moved beyond the presumptive label to look at the challenges to privacy inherent in anticipatory and predictive systems design. In her consideration of the limits of personalization, Maria Andersen (2014) suggests that anticipatory design practice has similarities to the “uncanny valley” of human robotics in seeming to cross the line into familiarity that machines can’t be expected to have. She discusses the ways that personalization can become invasive, and notes that for personalization to be accepted widely, it is necessary to find ways where users trust the designs and accept the added value personalization affords without feeling that a system knows too much about them. In writing about “Search 2.0” and commercial search engines, Michael Zimmer traces the implications of these systems that know too much:

“In their quest for Search 2.0, web search engines have gained the ability to track, capture, and aggregate a wealth of personal information stemming from the increased flow of personal information made available by growing use and reliance on Web 2.0-based applications. The full effects and consequences of the emerging Search 2.0 infrastructure are difficult to predict, but potentially include the exercise of disciplinary power against users, the panoptic sorting of users, and the general invisibility and inescapability of Search 2.0’s impact on users’ online activities” (Zimmer, 2008).

In her reflection on Shapiro’s foundational essay, Anne Quito (2015) points out that “anticipatory design presents new ethical checkpoints for designers and programmers behind the automation, as well as for consumers. Can we trust a system to safeguard our personal data from hackers and marketers—or does privacy become a moot concern?” In libraries and cultural institutions, privacy is never a moot concern, but there is some potential middle ground here. Michael Schofield (2015) alludes to a form of anticipatory design that might meet the privacy requirements of libraries while maintaining or establishing user trust when he writes about, “The low-fat flavor of anticipatory design without the personal-data part” in which “...context inferred from device or browser information is usually opt-in by default, and this would do most of the heavy lifting.” In the next section, we’ll take a closer look at these middle ground methods that don’t require the ability to “track, capture, and aggregate a wealth of personal information” (Zimmer, 2008) to discern context and user intention.

Anticipatory Design Using Query Analysis and Machine Cues

In his account of the rise of Google, John Battelle (2006) introduces the concept of a “database of intentions” to analyze the role that search engines play as the collectors of humanity’s curiosity, exploration, and expressed desires. Battelle is speaking about the search terms that we use and how these terms are recorded, analyzed, and then mined for insights by Google. The analysis of intention that Battelle mentions is where we can start to apply anticipatory design to our search systems. There is an evidentiary residue to the search act and our analysis of this residue allows us to introduce predictions, context, and relevance into the search interaction. Even a simple, anonymous recording of search terms can add value as I noted in an earlier article for the Code4Lib Journal on “making patron data work harder in applying user search query terms as browsable access points” within a search system (Clark, 2008). Interpretation and anticipation of contextual user data enables even more potential improvements within a search user interface, such as:

  • Watching for semantic cues within search queries to suggest/show facets related to generic questions about library collections and services.
  • Determining device of access to establish the need for more directional or locational facets.
  • Using global variables to establish client IP and pre-search location identity to show a facet that invites a user to run a local search query based on their physical location.
  • Determining the day of the week using global variables to suggest/show facets around featured services for the day and hours for the day.
  • Determining time of day using global variables to suggest/show facets that might feature relevant library news or specials matching the times.
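The cue-to-facet mappings above can be pictured as a simple dispatch function. The sketch below is illustrative only; the cue keys and facet names are hypothetical, not part of the prototype:

```python
# Sketch: map inferred cues about a search act to candidate facets.
# All cue keys and facet names are hypothetical, for illustration only.
def suggest_facets(cues):
    facets = []
    if cues.get("is_question"):
        facets.append("faq-knowledgebase")          # semantic cue in the query
    if cues.get("device") in ("iphone", "ipod", "android"):
        facets.append("location-directions")        # mobile device implies place
    if cues.get("client_ip"):
        facets.append("search-near-me")             # IP enables local search offer
    if cues.get("weekday") in ("Saturday", "Sunday"):
        facets.append("weekend-hours")              # day-of-week cue
    if cues.get("hour") is not None and 7 <= cues["hour"] <= 9:
        facets.append("cafe-morning-special")       # time-of-day cue
    return facets
```

For example, `suggest_facets({"device": "iphone", "weekday": "Saturday"})` would propose the location and weekend-hours facets.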

In this section, we’ll unpack some specific methods for determining the inferred aspects of the search act or a query that can allow us to improve search UX.

Cues from within the Literal String

Words within a string can give us a clue as to intent. Appendix 1 includes an example list of words that might indicate a “natural language,” question-type query. Think of the “five Ws and one H” of journalistic inquiry—who, what, where, when, why, how—and how one could use the intentional semantics of these words to monitor whether a question is being asked within a search query. The next step would be to build recognition (with code) for this type of query and watch for the pattern match on these words.[5] In terms of the search user interface, if a match is made, a different facet is presented to the user. This new facet might be focused on knowledgebase, factual, and/or FAQ content. As you can see, the goal here is recognizing context.[6] In this case, the context is derived from intention mining search query cue words to discern if a person is asking a question or searching based on a traditional keyword string. In the context of a real-world example, imagine someone typed “weather in bozeman” into a search box. Figure 3 shows an example of the type of information that could appear in a result facet if “weather” was picked out of that explicit search string as a query requiring a contextual answer.

Figure 3. Possible weather display facet based on pattern match of “weather” in search query.

Appendix 2 shows how one might make this facet appear using two different code samples. Intent pattern matching gives us a few options to display contextual information, but there are additional cues from the act of calling and loading a web page that we can use.

Cues from the Machine

A missing piece in our methods so far has been a lack of trying to understand context or intention in time and space. There are many means to pick up these cues. One of the first places to start is with the HTTP headers found inside of a web browser request when a link is clicked and a page is returned. Figure 4 shows a screenshot of a search interface with HTTP header data and various global system information displayed for demonstration purposes.[7]

Figure 4. Screenshot of desktop view of search interface with HTTP header and global system variables showing.

These HTTP headers[8] and related system information provide bits of data that we can use to infer context. Just consider for a moment what I can glean from the above HTTP header information.

  1. User-agent—device
  2. User-agent—operating system
  3. Web browser
  4. Web browser rendering engine
  5. Language
  6. Previous sites visited (inside the cookie information)

More specifically, here are some inferences from the HTTP header above:

  1. Desktop
  2. Mac OS X 10.7.5
  3. Google Chrome Web Browser
  4. AppleWebKit Browser Rendering Engine
  5. English
  6. Google
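As a rough illustration, the inferences listed above might be derived from the user-agent string with naive substring checks. This is a sketch only; real-world user-agent parsing is considerably messier (Chrome's UA string also contains "Safari," for instance, so the order of checks matters):

```python
# Naive user-agent sniffing sketch: infer device, OS, and browser from
# substring checks. Order matters because UA strings overlap (Chrome's
# string also contains "Safari").
def parse_user_agent(ua):
    ua_lower = ua.lower()
    device = "mobile" if ("iphone" in ua_lower or "android" in ua_lower) else "desktop"
    if "mac os x" in ua_lower:
        os_name = "Mac OS X"
    elif "windows" in ua_lower:
        os_name = "Windows"
    else:
        os_name = "other"
    if "chrome" in ua_lower:
        browser = "Chrome"
    elif "safari" in ua_lower:
        browser = "Safari"
    else:
        browser = "other"
    return {"device": device, "os": os_name, "browser": browser}

ua = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) "
      "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.122 Safari/537.36")
# parse_user_agent(ua) -> {'device': 'desktop', 'os': 'Mac OS X', 'browser': 'Chrome'}
```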

One can also use certain server-side variables to pull out additional information. So, in addition to the header information, we have information that includes:

  1. Page URL
  2. Referring URL
  3. Client IP address
  4. Timestamp

And even further, we can find a number of pieces of information using client-side JavaScript as a page is rendered in a web browser. Among the information that we can mine from a page loaded into the browser using the native JavaScript window.navigator object or the Date() constructor:

  1. Screen height/width/color depth
  2. Platform and operating system
  3. A list of installed browser plugins
  4. Time zone
  5. Whether a user allows cookies or tracking

With all of these pieces on the page and available as variables to a programming script, we can start to think about some enhancements we might bring into a search interface.

Figure 5. Screenshot of search interface with HTTP header and GLOBAL system variables showing from mobile device view.

Some of the immediate enhancements might include:

  1. Using the timestamp to determine day of week and showing a note about research assistance hours or library hours for that day.
  2. Using the timestamp to determine time of day and if it is around 8 a.m. or 2 p.m. suggest stopping by the cafe for a coffee.
  3. Using user agent to infer device, and ask if refinement around facts of place would be helpful.
  4. Using the IP address to do a preliminary lookup about geolocation.
  5. Applying the geolocation to check local weather and then suggest an action based on that data point. For example, “Hey, it is sunny out. Stop by the cafe for a lemonade.”
  6. Coupling the geolocation data point and the seasonal data point to suggest additional reading. For example, “It’s summertime and the living is easy. Here are this summer’s best beach or camping reads...”
  7. Using the referring URL value to provide a set of search suggestions for a returning, internal, or new user.
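Enhancements 1 and 2 above might be sketched like this in Python; the messages and the trigger hours (8 a.m. and 2 p.m.) are hypothetical, for illustration:

```python
from datetime import datetime

# Sketch of enhancements 1 and 2: derive day of week and hour from a
# timestamp and return a contextual suggestion. Messages are made up.
def time_based_suggestion(now):
    weekday = now.strftime("%A")  # e.g. "Tuesday"
    if now.hour in (8, 14):
        return "Stop by the cafe for a coffee."
    return "Research assistance is available today (%s); check our hours." % weekday
```

For example, a search at 8:30 a.m. would surface the coffee suggestion, while one at 11 a.m. would surface the day's research assistance note.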

Take, for example, a match within the user agent value on “ipod” or “iphone.” If this match occurs, we could introduce a location facet or a streamlined user interface designed for a tablet or smartphone. All of these enhancements are just based on the user-agent device and operating system. See Appendix 3 for an example of PHP code that would build those enhancements. We can also perform a preliminary scoping on the location using the client IP address. There are many services that can do an IP address to location lookup.[9] By passing the IP address to such an API, you get back a set of structured data as JSON (see fig. 6).

Figure 6. Screenshot of JSONLint view of values returned from Telize API.

Using a number of the returned values, such as the city (line 9) or the longitude (line 2) and latitude (line 3), we can now suggest a new facet within the search user interface. Imagine an invitation in the sidebar that asks, “Hey, it looks like you are in Somewhere, USA. Do you want to filter these results based on your location?”[10]
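A sketch of how the returned JSON could drive that invitation follows; the payload below is made up, with field names following the Telize-style response:

```python
import json

# Sketch: pull the city out of a geo-IP JSON payload and build the
# location-facet invitation. The payload is illustrative only.
payload = '{"city": "Bozeman", "latitude": 45.68, "longitude": -111.04, "country": "United States"}'

def location_invitation(raw_json):
    geo = json.loads(raw_json)
    if "city" in geo:
        return ("Hey, it looks like you are in %s. "
                "Do you want to filter these results based on your location?" % geo["city"])
    return None  # no city resolved; show no location facet
```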

Applying Anticipatory Design to the Search Experience

In the interest of bringing all of these possibilities into focus, I’m going to work through how a live search prototype uses these semantic and machine cues in applying anticipatory design principles to improve the user experience. The anticipatory search prototype[11] works best when applied to generic, large-scale search settings such as unified discovery systems, a library catalog, a library or university website search, etc. But there is no reason that, with properly designed facets or refinements, the anticipatory design model couldn’t work within even a smaller digital library collection search. The implementation is compelling because when I speak of the search experience, I am talking about any time a user could receive a “Did you mean?” style anticipatory treatment. At times, it is a conversation and a mediation moment with similarities to the reference interview. As we mentioned earlier, the model for a search interaction is fairly simple: someone asks a question (query) and a system tries to provide a series of answers (search results). In this section, we’ll try to illustrate how anticipation can be applied to these interactions and systems.

The first design goal is providing a clean and intuitive interface that communicates the primary action. The prototype borrows from the design conventions of commercial search engine interfaces in showing a single search box with a call to action (fig. 7).

Figure 7. Screenshot of the inert search interface.

After query initiation, we start to see the potential benefits of anticipatory design models with the introduction of facets and filters to draw out the intention behind the query (fig. 8).

Figure 8. Screenshot of initial facets for a standard query.

The goal is to show only necessary facets until the user has selected and communicated a more complex informational need that requires additional refinement. There are other anticipatory nuances of the prototype that aren’t part of Figure 8. For example, smaller query result sets (< 30) trigger a smaller number of facets (fig. 11 shows this limited set of facets for a query returning fewer than 30 results). The reasoning is that if a user’s query is already refined enough to produce a smaller set of results, it is not necessary to add complex facets and additional browse points; the system has been “successful.” Conversely, a larger result set (> 30) might indicate ambiguity in searcher intent or a broad query, so extra facets appear to aid in refining and specifying the query. Building on the idea of only introducing complexity when a user requires or asks for it, note how the facets remain closed (fig. 8) until a user interaction opens the information into view (fig. 9).
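The result-count heuristic described above can be sketched as follows; the threshold of 30 comes from the prototype, while the facet names themselves are hypothetical:

```python
# Sketch of the prototype's result-count heuristic: small result sets
# get a minimal facet list; larger, more ambiguous sets get extra
# refinement facets. Facet names are hypothetical.
BASE_FACETS = ["format", "availability"]
REFINE_FACETS = ["subject", "date", "author", "collection"]

def facets_for(result_count, threshold=30):
    if result_count < threshold:
        return BASE_FACETS            # query already refined; stay simple
    return BASE_FACETS + REFINE_FACETS  # broad query; offer more browse points
```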

Figure 9. Screenshot of initial facets for a standard query with facet open.

And there are still other ways to build anticipation into the system design model. As we noted in the previous section, watching for intentional cues, like natural language questions, in the query itself can lead to a different set of facets focused on facilitation moments such as connecting with a librarian or being able to look at the organizational FAQ knowledgebase to see if a similar question has been answered. Figure 10 demonstrates how a natural language question query makes a question facet appear along with an invitation to talk to a librarian.

Figure 10. Screenshot of initial facets for a conversational query.

Even further, we can look to introduce local context by situating and locating our user within a certain time and place (fig. 11).

Figure 11. Screenshot of facet noting time in semester and pushing a library browse collection.

This facet brings an offer to search the library browsing collection based on recognition of where we are in the semester calendar, but there are all kinds of possibilities here. Depending on time in the semester, we could look to surface writing help or citation and research services. Depending on time of day, we could push hours or an event that is happening later tonight. On a lighter note, we can even offer a cup of coffee at the library cafe based on the current weather. The prototype is an exercise in building a dialogue with a user and trying to realize anticipatory design possibilities.
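A sketch of how such a semester-calendar cue might be computed follows; the date ranges and messages are purely illustrative, not the prototype's actual rules:

```python
from datetime import date

# Sketch: map where we are in a (hypothetical) semester calendar to a
# contextual suggestion. All thresholds and messages are illustrative.
def semester_suggestion(today, semester_start, semester_end):
    week = (today - semester_start).days // 7 + 1
    weeks_total = (semester_end - semester_start).days // 7 + 1
    if week <= 2:
        return "New semester? Browse our course reserves."
    if week >= weeks_total - 2:
        return "Finals coming up: citation and writing help is available."
    return "Midsemester: try the library browsing collection."
```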

Future for Anticipatory Design

I would argue that anticipatory design within our interfaces and spaces has a strong future. In the context of search, its future may be the brightest. Any time we have a routine pattern of interaction, such as search, there is an opportunity to streamline and introduce new patterns of use or modes of access and discovery. A challenge for us will be learning where these enhancements truly add value for our users. Even more importantly, the challenge will be in understanding how privacy and the rights of individuals can be preserved in the face of predictive, anticipatory systems. You can see this question playing out in the earlier literature review, when authors were pointing to the “uncanny valley” (Andersen, 2015) or “presumptive design” (Busch, 2015; Sene, 2013)—or even the rhetorical question asking if we can “trust the system” (Quito, 2015). These questions about privacy are essential to ask. And they might be even more important for us to solve if we are to compete and participate in the searching culture of our time.[12] In many ways, this is the crux of this research and the reason to continue to try to understand anticipatory design in the library context. It is a huge question and one that can involve our whole organization: public services librarians with expertise in search and learning behaviors; developers and designers looking to provide recommendation systems; instructional librarians and staff who are planning learning spaces in anticipation of use; even library administrators asking for systems and spaces that inscribe and encode library values such as privacy and helpful intermediation. With a nod toward these broader implications of the anticipatory design model, the next steps for this search UX project will be more extensive user testing and analysis with very specific goals. First, we are interested in observational and task-based testing to see how searchers work through our system. 
Findings here will be applied to refine and finalize the search interaction model we have. Second, there is a need to understand the intentions and attitudes of our users more clearly. We are looking to conduct user interviews and live walk-throughs of the anticipatory search interface to determine users’ attitudes toward predictive systems and where anticipation crosses the line into feelings of surveillance. And finally, we’ll analyze our search query logs to understand what the most common queries are and when natural language queries appear, with the goal of applying this data to provide an even smarter interface.


Search and discovery remain a core service of libraries. We are working in many ways to make sure our indexed data is discoverable, and we can continue to refine our search UX for our local search interfaces. In this article, I have presented a way forward by introducing the idea and process behind enhancements of context and location within our search interfaces. Mining these contextual bits and pieces requires data techniques that look to understand the intent of search queries and the physical and network locations of our users. As we have noted, this practice does creep into issues of privacy for our users, but it is a practice that, if done responsibly, will lead to our next possibilities for search—inferred context and search as landing page. There are efforts to use this information for good. Libraries and cultural institutions can take heed of projects such as “Am I Unique?” where browser fingerprinting is applied as a point of education. To move forward into systems that can apply predictive analytics, recommendation facets, and text mining, we need to figure out how to use this information responsibly and win the confidence of our users. The path forward is to gather this data transparently, communicate our goals, and frame it as an improved service for our users as they work through our library systems.


Appendix 1: List of natural language question pattern words in a programming array()

$cueWords = array( "about", "above", "across", "after", "afterwards", "again", "against", "all", "almost", "alone", "became", "because", "become", "becomes", "becoming", "been", "before", "beforehand", "behind", "being", "below", "beside", "besides", "between", "beyond", "by", "call", "can", "cannot", "cant", "could", "couldnt", "cry", "describe", "do", "either", "except", "few", "fill", "find", "found", "from", "front", "full", "further", "get", "give", "go", "had", "has", "hasnt", "have", "how", "however", "hundred", "if", "in", "indeed", "interest", "into", "is", "keep", "might", "mine", "more", "moreover", "most", "mostly", "move", "much", "must", "my", "myself", "name", "never", "nevertheless", "nor", "not", "nothing", "now", "nowhere", "of", "off", "often", "on", "once", "one", "only", "onto", "or", "other", "others", "otherwise", "our", "ours", "ourselves", "out", "over", "own", "part", "per", "perhaps", "please", "put", "rather", "see", "seem", "seemed", "seeming", "seems", "should", "show", "side", "since", "so", "some", "somehow", "someone", "something", "sometime", "sometimes", "somewhere", "still", "such", "system", "take", "temp", "temperature", "than", "that", "the", "their", "them", "themselves", "then", "thence", "there", "thereafter", "thereby", "therefore", "therein", "thereupon", "these", "they", "this", "those", "though", "three", "through", "throughout", "thru", "thus", "to", "together", "too", "top", "toward", "towards", "under", "until", "up", "upon", "us", "very", "time", "were", "weather", "what", "whatever", "when", "whence", "whenever", "where", "whereafter", "whereas", "whereby", "wherein", "whereupon", "wherever", "whether", "which", "while", "whither", "who", "whoever", "whole", "whom", "whose", "why", "will", "with", "within", "without", "would", "yet", "you", "your", "yours", "yourself", "yourselves");

Appendix 2: PHP and Python code to process the natural language question pattern words in a search query

We can express this logic as a conditional that checks query tokens against the $cueWords array defined above:

// PHP example
$query = 'weather in bozeman';
$qToken = strtok(strtolower($query), " "); // first token of the query
if (in_array($qToken, $cueWords)) {
    echo "Current Conditions: ";
}

# Python example
cueWords = ["about", "above", "weather"]  # abbreviated; see Appendix 1 for the full list
query = 'weather in bozeman'
if any(token in cueWords for token in query.lower().split()):
    print("Current Conditions: ")

Appendix 3: PHP code that could provide enhancements around user agent strings

$cueUserAgentWords = array("android", "iphone", "ipod"); // abbreviated list of device cue words
$userAgent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.122 Safari/537.36';
foreach ($cueUserAgentWords as $cueWord) {
    if (strpos(strtolower($userAgent), $cueWord) !== false) {
        echo "Are you interested in local facts about this place?";
        break;
    }
}


    1. Note that I’m taking my cues here from Tom Anthony’s excellent post on the “new query model,” http://moz.com/blog/from-keywords-to-contexts-the-new-query-model.

    2. I have spoken about locational context before in 2011, https://www.lib.montana.edu/~jason/talks/cil2011-libapp-location.pdf, and have even prototyped an app that tries to convey context about a place, https://www.lib.montana.edu/~jason/files/geolocate/.

    3. A digest of these OCLC findings is also available at http://www.oclc.org/content/dam/oclc/reports/2010perceptions/collegestudents.pdf.

    4. More specifically, Khodabandelou et al. express their definition as “The main objective of Intention Mining is to extract sequences of actors’ activities from an event log to evaluate and predict the actors’ intentions related to those activities” (2013, p. 1).

    5. While we are doing pattern matching on likely question-type words, there are advanced tools that can do more nuanced analysis around classification, tokenization, stemming, tagging, parsing, and semantic reasoning. See Python’s NLTK library for one of these examples, http://www.nltk.org/.

    6. Another good source for query intent cues is your list of smartphone voice commands, https://techranker.net/how-to-use-siri-siri-commands-list-questions-to-ask-siri-app/.

    7. I’m using PHP in these examples, but many of these HTTP header values and $_SERVER global variables can be derived from mod_python if that is your programming language of choice. http://modpython.org/live/mod_python-3.3.1/doc-html/pyapi-mprequest-mem.html.

    8. For a list of HTTP header fields, see http://en.wikipedia.org/wiki/List_of_HTTP_header_fields.

    9. For example, Telize, http://www.telize.com/, an open-source GeoIP JSON API, is one that allows you to host your own IP-resolving API.

    10. At this point, we can apply the HTML5 geolocation API to verify our latitude and longitude values, https://developer.mozilla.org/en-US/docs/Web/API/Geolocation/Using_geolocation.

    11. The prototype is available at https://www.lib.montana.edu/~jason/files/search-ux/ and the code for the prototype is available at https://github.com/jasonclark/search-ux.

    12. The search prototype does have a “Privacy?” page that discusses the intentions behind the project and provides some transparency by listing the sources used to anticipate user actions, https://www.lib.montana.edu/~jason/files/search-ux/privacy.html.