Need help understanding API
-
I know what information I need to pull... I know I need APIs to do it... I just don't know how to pull it or where. I have tools like Screaming Frog, Scrapebox, SEMRush, Moz, Majestic, etc. I need to find out how to type in a query and pull the top 10 ranking specs like DA, PA, Root Domains, Word Count, Trust Flow, etc.
Here is a screenshot of info I manually pulled...
https://screencast.com/t/H1q5XccR8 (I can't hyperlink it... it's giving me an error)
How do I auto pull this info??
-
There's no single API that does all of this but you can chain a couple of them together to get what you want. As Tawny pointed out, it's a technical task and you may need the help of a web developer.
As you have access to SEMRush, you can use their "organic results" API call which, given the keyword you're interested in, will return the top ranking URLs. You can see the documentation specific to that call here. So that gets you from your starting point (being interested in a query) to having the top URLs. You can limit the number of rows returned by the SEMRush API—sounds like you'd only want the first 10 rows (i.e. the top 10 results for the keyword).
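A minimal sketch of that first step, using only the Python standard library. The `phrase_organic` report type and the `Dn`/`Ur` export columns are my reading of SEMRush's API documentation — verify them (and the `database` code for your region) against the current docs before relying on this:

```python
import os
import urllib.parse
import urllib.request

def parse_semrush_csv(text):
    """Parse SEMRush's semicolon-separated response into a list of row dicts."""
    lines = [ln for ln in text.strip().splitlines() if ln]
    header = lines[0].split(";")
    return [dict(zip(header, ln.split(";"))) for ln in lines[1:]]

def top_urls(keyword, api_key, limit=10):
    """Fetch the top organic ranking URLs for a keyword from SEMRush."""
    params = urllib.parse.urlencode({
        "type": "phrase_organic",     # organic results report
        "key": api_key,
        "phrase": keyword,
        "database": "us",             # regional database; adjust as needed
        "export_columns": "Dn,Ur",    # domain and URL columns
        "display_limit": limit,       # top 10 results only
    })
    with urllib.request.urlopen("https://api.semrush.com/?" + params) as resp:
        return [row["Ur"] for row in parse_semrush_csv(resp.read().decode("utf-8"))]
```

The response is plain semicolon-separated text with a header row, so the parsing step is trivial; the cost is charged per row returned, which is why `display_limit` matters.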
Now taking that list you can send it to the Moz API's URL Metrics call. This will give you back DA, PA, and Moz's other link metrics for each URL. Note that Trust Flow is a Majestic metric, not a Moz one — if you need it, you'd add a third call to Majestic's API using the same list of URLs.
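A sketch of the Mozscape side, again standard library only. The signed-authentication scheme (HMAC-SHA1 over `AccessID` plus newline plus `Expires`, base64-encoded) is how Mozscape's docs describe it; the `Cols` bit-flag values for the metrics you want must be looked up in the API reference, so the `cols` parameter below is left to the caller:

```python
import base64
import hashlib
import hmac
import json
import time
import urllib.parse
import urllib.request

def mozscape_signature(access_id, secret_key, expires):
    """Build the HMAC-SHA1 request signature the Mozscape API expects."""
    string_to_sign = "%s\n%d" % (access_id, expires)
    digest = hmac.new(secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode("utf-8")

def url_metrics(urls, access_id, secret_key, cols):
    """POST a batch of URLs to the Mozscape url-metrics endpoint."""
    expires = int(time.time()) + 300  # signature valid for 5 minutes
    params = urllib.parse.urlencode({
        "Cols": cols,  # bit flags for the metrics you want (see API reference)
        "AccessID": access_id,
        "Expires": expires,
        "Signature": mozscape_signature(access_id, secret_key, expires),
    })
    req = urllib.request.Request(
        "https://lsapi.seomoz.com/linkscape/url-metrics/?" + params,
        data=json.dumps(urls).encode("utf-8"),  # batch of URLs as a JSON array
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Batching the ten URLs into one POST keeps you to a single request instead of ten, which matters on rate-limited free-tier accounts.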
Neither tool will tell you the word count. If you need to calculate that, you'll have to crawl the pages somehow. It depends whether you really need to completely automate everything. If "semi-automation" is good enough, I'd suggest that your script, after fetching the top ranking URLs from SEMRush, writes them out to a CSV as well. Then you can use Screaming Frog in list mode to crawl all of the URLs listed in the CSV. So everything would be automated except for gathering the word counts. You'd have to stitch together the results from your Screaming Frog crawl with the data you had back from Moz.
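The semi-automated handoff described above is just a one-column file, one URL per row — exactly what Screaming Frog's list mode accepts. A sketch:

```python
import csv

def write_url_list(urls, path):
    """Write one URL per row so Screaming Frog's list mode can ingest the file."""
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        for url in urls:
            writer.writerow([url])
```

After the crawl, export Screaming Frog's results and join them back to the Moz data on the URL column.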
If everything must be automated, and you really need word count or other on-page information, your script will also need to crawl the pages itself. For JavaScript-heavy pages that means a developer familiar with a browser-automation tool like Selenium; for static pages, a plain HTTP client and an HTML parser will do. In almost all use cases the full browser setup is overkill, so I'd suggest focusing on the SEMRush and Moz APIs for now.
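If you do end up fetching static pages yourself, the standard library is enough. Below is a deliberately naive word-count helper — it strips `script` and `style` blocks and counts whitespace-separated tokens in the remaining text, which is a rough approximation, not how any SEO tool officially computes word count:

```python
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collect visible text, skipping script and style blocks."""
    def __init__(self):
        super().__init__()
        self._skip = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.chunks.append(data)

def word_count(html):
    """Rough word count of a page's visible text."""
    parser = _TextExtractor()
    parser.feed(html)
    return len(" ".join(parser.chunks).split())
```

Feed it the body you fetch with `urllib.request`; anything rendered by JavaScript won't be counted, which is the point at which Selenium becomes worth the trouble.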
-
Hey there!
Tawny from Moz's Help Team here.
Our API doesn't contain all those metrics — what we're able to pull in through the Mozscape API is the same kind of data you'd be able to get through Open Site Explorer, but in bulk. You can read about all the different kinds of data you can collect with the API over here, in our Help Hub pages: https://moz.com/help/guides/moz-api/mozscape/api-reference
You can read more about how to use it and get started from this page: https://moz.com/help/guides/moz-api/mozscape/getting-started-with-mozscape/create-and-manage-your-account
The API is a pretty technical tool, so you may need a web developer's help to interpret the responses you get. If you have any more questions, feel free to reach out to us at help@moz.com and we'll do our best to answer any questions you might have.
Cheers!