Querying literary agents is, among other things, an information problem. The pool of agents actively accepting submissions at any given time numbers in the hundreds. Their preferences shift – genres they were actively seeking six months ago may now be closed; new agents at established agencies may be building lists in exactly your category. Response times vary from two weeks to eight months. Some agents send personalised rejections; most don't. Some are meticulous about acknowledging receipt; others operate a strict no-response-means-no policy.
Navigating this without a system is how writers end up querying the wrong agents, losing track of where they've submitted, or misjudging whether a non-response is a rejection or simply a long wait. QueryTracker is the closest thing to a centralised intelligence layer that querying writers have – a combination of agent database, community research tool, and personal submission tracker. This guide covers how to use it from the beginning.
"A word after a word after a word is power."
– Margaret Atwood, "Spelling," True Stories, 1981
The query process is, in its way, an extension of that accumulation: one submission after another, each one a small act of commitment to the work. QueryTracker is the tool that keeps that accumulation organised and purposeful.
Create a free account
QueryTracker operates on a freemium model. A free account gives access to the agent database, basic search filters, community notes, and the personal submission tracker – which is enough for most writers, particularly in the early stages of querying. A paid premium account unlocks additional agent statistics, the ability to sort and filter query data more granularly, and some enhanced tracking features.
Starting with the free account is sensible. The core functionality is genuinely useful without upgrading, and it's worth understanding what the platform offers before deciding whether the premium features justify the cost for your particular querying strategy.
Registration requires an email address and a username. The username is public – it appears alongside any comments or ratings you contribute to agent profiles – so choose something you'd be comfortable having appear in public. Many writers use a pen name or a partial name.
Search the agent database
QueryTracker's agent search is the starting point for building a query list. The search interface allows filtering by genre, age category (adult, young adult, middle grade, children's), and whether an agent is currently open to queries. Genre tagging on the platform is based on what agents have publicly stated they represent, updated by the QueryTracker team and community.
A few practical notes on searching:
Cast wide first, then narrow. Start with a genre filter and review the full list of agents who represent it before applying additional filters. Some agents represent adjacent genres under a broader category (e.g., an agent who lists "literary fiction" may also be actively seeking upmarket commercial fiction or narrative nonfiction). Reading individual profiles clarifies what "literary fiction" means to each agent.
Cross-reference with Publishers Marketplace and agent wishlists. QueryTracker is a strong starting point, but it isn't the only source. Many agents maintain wishlists on Manuscript Wishlist (manuscriptwishlist.com) with more granular detail about what they're currently seeking. Publishers Marketplace (a paid service, though a free account shows some information) lists recent deals by agent, which reveals what an agent has actually sold – a more reliable signal of their current interests than stated genre preferences alone.
Before adding any agent to your query list, visit their agency's actual website and read their current submission guidelines. QueryTracker's data is community-maintained and generally reliable, but agents change their preferences, temporarily close to queries, or move agencies โ and the most current information is always on the agent's own page.
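The cross-referencing step above amounts to set logic: an agent becomes a strong candidate when several independent sources agree. A minimal sketch, with invented agent names standing in for the lists you'd compile by hand from each source:

```python
# Hypothetical shortlists compiled from each research source.
# The agent names are placeholders, not real agents.
querytracker_open = {"Agent A", "Agent B", "Agent C", "Agent D"}   # open to queries
mswl_seeking_genre = {"Agent B", "Agent C", "Agent E"}             # wishlist matches
recent_deals_in_genre = {"Agent C", "Agent D"}                     # sold in your genre

# Strongest candidates: open, actively asking for the genre,
# and with recent sales in it.
top_tier = querytracker_open & mswl_seeking_genre & recent_deals_in_genre

# Next tier: open to queries and matching on at least one other signal.
second_tier = (querytracker_open
               & (mswl_seeking_genre | recent_deals_in_genre)) - top_tier

print(sorted(top_tier))     # ['Agent C']
print(sorted(second_tier))  # ['Agent B', 'Agent D']
```

The same triage works equally well in a spreadsheet with one column per source; the point is that agreement across sources, not presence in any single one, drives the priority order.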
Read the agent statistics
Each agent's QueryTracker profile includes statistics derived from query data submitted by users: the number of queries logged, the percentage that resulted in a request for more material (a partial or full manuscript), the percentage that resulted in offers of representation, and the median response time for both queries and requested materials.
These statistics are useful but require careful interpretation.
Response time medians are the most reliable figure. If the median response time for queries is 8 weeks and you're at week 12 with no reply, that's meaningful data – it suggests either a slower period, a query that passed without response (some agents do not reply to queries they're not pursuing), or a technical issue with your submission. If you're at week 4, the median tells you it's simply early.
Request rates are less reliable than they appear. The percentage of queries resulting in requests is influenced heavily by the self-selection of who queries a given agent. An agent with a 20% request rate on QueryTracker may be receiving a disproportionate number of queries from well-prepared writers who did careful research before submitting – the same writers who would be logging their data on QueryTracker. Treat request rates as rough context rather than a reliable predictor of your own query's likelihood of success.
Sample sizes matter. Statistics derived from 30 logged queries are considerably less reliable than those from 300. Check the sample size before drawing conclusions from any particular figure.
QueryTracker statistics reflect only the queries that users have chosen to log. They are a community-sourced sample, not a complete record of an agent's query activity. An agent who has responded to 500 queries may only have 80 logged on QueryTracker. The platform's value is in trend data and community context, not in statistically rigorous benchmarks.
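The median-based check described above can be sketched in a few lines. The logged response times here are invented example data, not QueryTracker's actual figures:

```python
from statistics import median

def weeks_past_median(weeks_waiting, logged_response_weeks):
    """How far a wait has run past the community median.

    logged_response_weeks is a hypothetical list of response times
    (in weeks) that other writers have logged for this agent.
    A negative result means it's still early.
    """
    return weeks_waiting - median(logged_response_weeks)

# Example: the median of these logged times is 8 weeks.
logged = [3, 6, 8, 10, 14]
print(weeks_past_median(12, logged))  # 4  -> four weeks past the median
print(weeks_past_median(4, logged))   # -4 -> simply early
```

Per the sample-size caveat above, a result like this is only worth acting on when the list of logged responses is reasonably large; with a handful of data points the median itself is noisy.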
Read and contribute community notes
Below the statistics on each agent profile, QueryTracker displays community notes – short comments left by writers who have queried that agent, describing their experience. These notes are among the platform's most valuable features, and among its most misused.
Community notes are most useful for factual, verifiable information: whether the agent sent a form rejection or a personalised one; whether a request for a full manuscript came with any specific instructions; how accurately the stated response timeline matched the actual experience. Notes that contain this kind of concrete, specific information help other writers calibrate their expectations and plan follow-up timing.
Notes that offer characterisations of agent behaviour, infer personality from rejection language, or make claims about what an agent is "really like" based on a single interaction are less useful and should be read with appropriate scepticism. A rejection is not a comprehensive data point about a person.
If you query an agent and receive a response, contributing a note to their profile – factual, brief, focused on what would actually help another writer – is a meaningful contribution to the community that makes the platform valuable.
Use the submission tracker
QueryTracker's submission tracker is, for many writers, the feature they use most consistently throughout the querying process. It allows you to log each query submission – noting the agent, agency, submission date, how you submitted (email, QueryManager form, another portal), what materials you sent, and any response received.
The tracker gives you a dashboard view of all active submissions: which agents are still pending, which have responded, and what the outcome was. For a writer managing simultaneous queries to twenty or thirty agents across a period of months, this view is practically essential. Keeping the same information in a personal spreadsheet is perfectly workable, but the QueryTracker version has the advantage of pulling in agent response time data alongside your personal record – making it easier to identify which non-responses are statistically within normal range and which are overdue.
Log every submission immediately. The tracker is only useful if it reflects your actual query activity. The habit of logging a submission the same day you send it prevents the accumulation of a backlog that becomes difficult to reconstruct accurately later.
Log responses promptly too. When a rejection arrives (or, occasionally, a request), update the relevant entry. This keeps your active query list accurate and contributes to the platform's community statistics – which depend on writers closing the loop on submissions they've logged.
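For writers who prefer the spreadsheet route, the record the tracker keeps can be sketched as a small data type. The field names below are assumptions mirroring the information the tracker asks for, not QueryTracker's actual schema:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class QueryRecord:
    """One logged submission, roughly mirroring the tracker's fields."""
    agent: str
    agency: str
    sent: date
    method: str                        # "email", "QueryManager", etc.
    materials: str                     # e.g. "query + first 10 pages"
    response: Optional[str] = None     # None while still pending
    responded: Optional[date] = None

def pending(log):
    """Active submissions: every record without a logged response."""
    return [q for q in log if q.response is None]

# Invented example entries, for illustration only.
log = [
    QueryRecord("Agent A", "Agency X", date(2024, 3, 1),
                "email", "query + 5 pages"),
    QueryRecord("Agent B", "Agency Y", date(2024, 3, 2),
                "QueryManager", "query + synopsis",
                response="form rejection", responded=date(2024, 4, 10)),
]
print(len(pending(log)))  # 1
```

Logging each submission the day it goes out, as advised above, is what keeps a list like this accurate enough to be useful.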
What QueryTracker doesn't do
A few things worth being clear about before relying too heavily on any single tool in the querying process.
QueryTracker does not vet agents for legitimacy. An agent appearing in its database is not an endorsement of that agent's credentials, track record, or professional conduct. Before querying any agent, it's worth cross-referencing with Writer Beware – the most reliable source for flagging predatory agents, fraudulent agencies, and known publishing scams. QueryTracker can tell you response times; Writer Beware can tell you whether an agent is worth querying at all.
QueryTracker also doesn't include every agent in the industry. Some agents – particularly those at large agencies with full lists, or those who don't accept unsolicited queries – may not appear in the database or may have limited data. A thorough agent research process uses QueryTracker as one source among several, not as a comprehensive directory.
Used with those caveats in mind, QueryTracker is one of the genuinely useful free tools available to querying writers – a combination of community intelligence and personal organisation that makes a complex, opaque process meaningfully more navigable.
The query process begins when the manuscript is ready – not before. The Creator's Hearth daily prompt is there for the writing stage, and the guide to hiring a developmental editor is worth reading before you decide the manuscript is finished.