Article Data Extractor API

This API is perfect for anyone who wants to retrieve structured data from an article on the web. With just the URL, you will receive an extensive set of information. Try it out!

About the API:

With Article Data Extractor you will be able to scrape and retrieve all the relevant information from any article on the web. Forget about ads, banners, and other inessential parts; receive only the data related to the article of your choice.

 

What does the API receive and what does it provide (input/output)?

Article Data Extractor takes only one parameter: the URL of any article or blog post. It scrapes and extracts the relevant information, such as the title, text, publication time, and media links. Save time by receiving all this data in a structured form so you can filter, query, and store the information the web has for you.
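As a sketch of what the structured output looks like in practice, the snippet below unpacks a response with the shape shown in the example response further down this page (`error`, `message`, and a `data` object with `title`, `description`, `links`, `image`, and `content`). The `unpack` helper and the sample dictionary are illustrative, not part of the API.

```python
# Illustrative helper (not part of the API): pull out the commonly used
# fields from a response shaped like the documented example response.
def unpack(response: dict) -> dict:
    """Return just the main article fields, or raise on an API error."""
    if response.get("error"):  # 0 means success in the example response
        raise RuntimeError(response.get("message", "extraction failed"))
    data = response["data"]
    return {
        "title": data.get("title"),
        "description": data.get("description"),
        "image": data.get("image"),
        "links": data.get("links", []),
    }

# A minimal sample in the documented shape (values are placeholders).
sample = {
    "error": 0,
    "message": "Article extraction success",
    "data": {
        "url": "https://example.com/a",
        "title": "Sample",
        "description": "...",
        "links": ["https://example.com/a"],
        "image": "https://example.com/a.png",
        "content": "<div>...</div>",
    },
}
```

With `unpack(sample)` you get a flat dictionary of the article fields, ready to filter or store.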

 

What are the most common use cases of this API?

This API is perfect for any marketing agency or news platform that wants to retrieve the most important information from an article: the author's name, the text of the article itself, and, not least, its tags. With this API, all the tags embedded in the article are available.

It is also great for comparing which images other blogs or news sites use in their articles.

So, if you have a large collection of articles, you will be able to filter by author name, by tags, or even by publication date. This API will help you keep your articles better organized.
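The kind of local filtering described above can be sketched as follows. The field names (`author`, `tags`, `published`) and the `filter_articles` helper are assumptions for illustration; the exact fields depend on what each article exposes.

```python
# A hypothetical local collection of extraction results (field names assumed).
articles = [
    {"title": "A", "author": "Alice", "tags": ["ai", "health"], "published": "2024-09-01"},
    {"title": "B", "author": "Bob", "tags": ["finance"], "published": "2024-10-15"},
    {"title": "C", "author": "Alice", "tags": ["ai"], "published": "2024-11-02"},
]

def filter_articles(items, author=None, tag=None, since=None):
    """Filter extracted articles by author, tag, or earliest publication date."""
    out = []
    for article in items:
        if author and article.get("author") != author:
            continue
        if tag and tag not in article.get("tags", []):
            continue
        # ISO 8601 date strings compare correctly as plain strings.
        if since and article.get("published", "") < since:
            continue
        out.append(article)
    return out
```

For example, `filter_articles(articles, author="Alice", since="2024-10-01")` keeps only Alice's articles published from October 2024 onward.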

 

Are there any limitations with your plans?

Besides the monthly API call limits, each plan has a rate limit:

  • Basic: 3 requests per second.
  • Pro: 5 requests per second. 
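A minimal client-side throttle for staying under the per-second limits above; `min_interval` and `Throttle` are illustrative helpers, not part of the API.

```python
import time

def min_interval(requests_per_second: int) -> float:
    """Smallest allowed gap between consecutive requests, in seconds.
    3 requests/second (Basic) means at least 1/3 s between calls."""
    return 1.0 / requests_per_second

class Throttle:
    """Sleep just enough before each call to stay under the rate limit."""
    def __init__(self, requests_per_second: int):
        self.interval = min_interval(requests_per_second)
        self.last = 0.0

    def wait(self) -> None:
        now = time.monotonic()
        delay = self.last + self.interval - now
        if delay > 0:
            time.sleep(delay)
        self.last = time.monotonic()
```

Calling `Throttle(3).wait()` before each request keeps a Basic-plan client within its 3 requests per second.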

API Documentation

Endpoints


Version 2.0 allows you to parse any article of your choice.

Extract the main article and metadata from a news entry or blog post.

GET https://zylalabs.com/api/35/article+data+extractor+api/1880/article+data+extractor

Article Data Extractor - Endpoint Features

Parameter Description
url [Required] The URL of the article.

API EXAMPLE RESPONSE
                                                                                                                                                                                                                            {"error":0,"message":"Article extraction success","data":{"url":"https://www.nature.com/articles/s41746-024-01275-6","title":"A data-driven framework for identifying patient subgroups on which an AI/machine learning model may underperform","description":"Let \\({\\mathcal{W}}\\) be the sample space of W and let P be the distribution of the evaluation data.\nFor example, if the evaluation dataset contains no male patients over age 70, then neither will subsets in \\({{\\mathcal{U}}}_{\\alpha }\\).\nWhile ideally we would like to consider subgroups not represented in the evaluation dataset, SA is statistically constrained to only uncovering unknown subgroups that are present within the evaluation dataset.\nModel performance diagnostics are a suite of techniques that model developers can use to determine the correctability of poor model performance in a subgroup.\nThis combined, multi-site evaluation dataset contained 60,998 patient encounters and was used in the application of the proposed evaluation framework....","links":["https://www.nature.com/articles/s41746-024-01275-6"],"image":"https://media.springernature.com/m685/springer-static/image/art%3A10.1038%2Fs41746-024-01275-6/MediaObjects/41746_2024_1275_Fig1_HTML.png","content":"<div><h3 class=\"c-article__sub-heading\" id=\"Sec11\">Stability analysis</h3><p>Hidden stratification is difficult to detect because it is characterized by a disparity between a model&#8217;s average performance and its performance on sufficiently rare, but a priori unknown, subgroups. Stability analysis is a powerful tool for surfacing these types of subgroups because it allows one to test the uniformity of a model&#8217;s performance across a range of different data distributions<a title=\"Subbaswamy, A., Adams, R. 
&amp; Saria, S. Evaluating model robustness and stability to dataset shift. In International Conference on Artificial Intelligence and Statistics 2611&#8211;2619 (PMLR, 2021).\" href=\"/articles/s41746-024-01275-6#ref-CR34\" id=\"ref-link-section-d31017338e1373\">34</a>. Thus, by defining a set of data distributions that vary by subgroup prevalence, a model evaluator can use stability analysis to determine if there exist data distributions within the set on which the model has problematically low performance compared to its average performance on the full evaluation data distribution.</p><p>The first step of the AFISP framework is to perform a stability analysis to identify the largest worst-performing subset of the evaluation data on which the model&#8217;s performance is below a user-defined threshold. This subset is then further analyzed to determine concrete subgroups that are present within the identified data subset.</p><h4 class=\"c-article__sub-heading c-article__sub-heading--small\" id=\"Sec12\">Identifying worst-case subsets</h4><p>We use a stability analysis framework developed by ref. <a title=\"Subbaswamy, A., Adams, R. &amp; Saria, S. Evaluating model robustness and stability to dataset shift. In International Conference on Artificial Intelligence and Statistics 2611&#8211;2619 (PMLR, 2021).\" href=\"/articles/s41746-024-01275-6#ref-CR34\" id=\"ref-link-section-d31017338e1387\">34</a> to perform the first step of AFISP. We will refer to this method as SA (stability analysis).</p><p>In the first step, our goal is to identify the subset of the full evaluation dataset of a particular size, and defined by a particular set of user-selected features, on which the model performs worst. For example, if a user is interested in evaluating performance across demographic subgroups, they might allow the subgroup definition to depend on demographic characteristics such as age, sex, and race. 
Formally, for an evaluation dataset <i>D</i> consisting of input features <i>X</i> and prediction label <i>Y</i>, the user specifies a set of features <i>W</i> &#8834; {<i>X</i>, <i>Y</i>} and a subset fraction <i>&#945;</i> (subset size measured as a fraction of the dataset). Then, following SA<a title=\"Subbaswamy, A., Adams, R. &amp; Saria, S. Evaluating model robustness and stability to dataset shift. In International Conference on Artificial Intelligence and Statistics 2611&#8211;2619 (PMLR, 2021).\" href=\"/articles/s41746-024-01275-6#ref-CR34\" id=\"ref-link-section-d31017338e1416\">34</a>, we define an <i>uncertainty set</i> made up of all possible subsets of size <i>&#945;</i> defined based solely on features in <i>W</i>.</p>\n                    <h3 class=\"c-article__sub-heading\" id=\"FPar1\">Definition 1</h3>\n                    <p>(Uncertainty set). Let \\({\\mathcal{W}}\\) be the sample space of <i>W</i> and let <i>P</i> be the distribution of the evaluation data. Then, define the <b>uncertainty set</b> as \\({{\\mathcal{U}}}_{\\alpha }=\\{{\\mathcal{S}}\\subseteq {\\mathcal{W}}:P(W\\in S)=\\alpha \\}\\) or the collection of subsets of values and features in \\({\\mathcal{W}}\\) with probability <i>&#945;</i> under <i>P</i>. Note that \\({\\mathcal{S}}\\) denotes subsets of the sample space of <i>W</i>, and \\(S\\in {\\mathcal{S}}\\) represents an individual such subset.</p>\n                  <p>Considering demographic subgroups again, a user might select <i>W</i>&#8201;=&#8201;age, race, sex, and <i>&#945;</i>&#8201;=&#8201;20%. The corresponding uncertainty set \\({{\\mathcal{U}}}_{\\alpha }\\) would contain subsets of the demographics space such that 20% of samples are included in each subset. Across the subsets, the way other variables relate to variables in <i>W</i> does not change. For example, considering a subset of Black men under 30, the comorbidity distribution in this subgroup would be unchanged. 
However, if we also included comorbidities in <i>W</i>, the uncertainty set could hypothetically contain a subset that includes only Black men under 30 who have either sickle cell disease or type 1 diabetes. Note that when using SA, one cannot select subsets that are not represented in the evaluation distribution. For example, if the evaluation dataset contains no male patients over age 70, then neither will subsets in \\({{\\mathcal{U}}}_{\\alpha }\\). While ideally we would like to consider subgroups not represented in the evaluation dataset, SA is statistically constrained to only uncovering unknown subgroups that are present within the evaluation dataset.</p><h4 class=\"c-article__sub-heading c-article__sub-heading--small\" id=\"Sec13\">Finding the worst-performing subset in <p class=\"mathjax-tex\">\\({{\\mathcal{U}}}_{\\alpha }\\)</p>\n                           </h4><p>Once the uncertainty set has been defined, SA finds the subset in \\({{\\mathcal{U}}}_{\\alpha }\\) that has the <i>worst</i> average performance under a model \\({\\mathcal{M}}\\) and for a particular loss function <i>&#8467;</i>. Formally, this is defined by the following optimization problem:</p><p class=\"c-article-equation__content\"><p class=\"mathjax-tex\">$$\\mathop{\\sup }\\limits_{S\\in {{\\mathcal{U}}}_{\\alpha }}{{\\mathbb{E}}}_{P}[\\ell ({\\mathcal{M}}(X),Y)| W\\in S].$$</p></p><p class=\"c-article-equation__number\">\n                    (1)\n                </p><p>In words, SA tries to find <i>S</i> in \\({{\\mathcal{U}}}_{\\alpha }\\) that maximizes the expected loss on the distribution of the evaluation data subset <i>S</i>. This is referred to as the <i>worst-performing subset</i> of size <i>&#945;</i>. For details regarding how this optimization problem is solved, we refer readers to the SA paper<a title=\"Subbaswamy, A., Adams, R. &amp; Saria, S. Evaluating model robustness and stability to dataset shift. 
In International Conference on Artificial Intelligence and Statistics 2611&#8211;2619 (PMLR, 2021).\" href=\"/articles/s41746-024-01275-6#ref-CR34\" id=\"ref-link-section-d31017338e1934\">34</a>.</p><p>The worst-performing subset will likely be a mixture of different subgroups, with the prevalence of each of the constituent subgroups different from its prevalence in the evaluation dataset. As an example, suppose Black females under 18 make up 8% of patients in the evaluation dataset. It is possible for the worst-performing subset of size 0.1 to consist of 60% Black females under 18. By examining the makeup of worst-performing subsets as a function of size, we can determine subgroup characteristics that are associated with poor model performance.</p><h4 class=\"c-article__sub-heading c-article__sub-heading--small\" id=\"Sec14\">Identifying a data subset with poor performance</h4><p>Solving the optimization problem in Equation (<a href=\"/articles/s41746-024-01275-6#Equ1\">1</a>) provides evaluators a way to study how model performance decays as the evaluation population distribution is gradually changed adversarially (with respect to the target model). Given a set of shift characteristics <i>W</i>, a subset size of <i>&#945;</i>&#8201;=&#8201;1 corresponds to model performance on the full evaluation dataset. As <i>&#945;</i> decreases and approaches 0, the worst-performing shifted data distribution is allowed to be more and more different from the original data distribution (smaller subsets can differ more from the overall population than larger subsets). Thus, for a fixed choice of <i>W</i>, we can plot a performance stability curve of the worst-case shift performance for a grid of values \\(\\alpha \\in \\left(0,1\\right]\\).</p><p>Applying SA for a grid of <i>&#945;</i> values, we created the stability curve presented in the Results section in Fig. <a href=\"/articles/s41746-024-01275-6#Fig2\">2</a>. 
The performance of the target model (in blue) decays as the subset fraction decreases. Using the performance threshold defined by the baseline model&#8217;s performance, we identified the largest subset fraction that produces performance worse than the threshold to be <i>&#945;</i>&#8201;=&#8201;0.1. The performance threshold can be determined in several ways, including using a known tolerance (i.e., the model is not suitable if its performance is below a certain value) ...

Article Data Extractor - CODE SNIPPETS


curl --location --request GET 'https://zylalabs.com/api/35/article+data+extractor+api/1880/article+data+extractor?url=https://www.thestartupfounder.com/use-this-data-extractor-api-to-get-article-data-from-mathrubhumi/' --header 'Authorization: Bearer YOUR_API_KEY' 
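The curl call above can be reproduced in Python roughly as follows; `extract_article` and `build_url` are hypothetical helper names, and `YOUR_API_KEY` is a placeholder for your personal access key.

```python
import json
import urllib.parse
import urllib.request

# Endpoint URL as given in this documentation.
API_URL = ("https://zylalabs.com/api/35/article+data+extractor+api/1880/"
           "article+data+extractor")

def build_url(article_url: str) -> str:
    """Append the required `url` query parameter, percent-encoded."""
    return f"{API_URL}?{urllib.parse.urlencode({'url': article_url})}"

def extract_article(article_url: str, api_key: str) -> dict:
    """Call the endpoint with a Bearer token and return the parsed JSON body."""
    req = urllib.request.Request(
        build_url(article_url),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Requires a valid key and network access.
    body = extract_article("https://www.example.com/some-article", "YOUR_API_KEY")
    print(body["data"]["title"])
```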


    

API Access Key & Authentication

After signing up, every developer is assigned a personal API access key, a unique combination of letters and digits that grants access to the API endpoint. To authenticate with the Article Data Extractor API, simply include your bearer token in the Authorization header.
Headers
Header Description
Authorization [Required] Should be Bearer access_key. Your personal access key is shown once you are subscribed.

Simple Transparent Pricing

No long term commitments. One click upgrade/downgrade or cancellation. No questions asked.

πŸš€ Enterprise

Starts at
$10,000/Year


  • Custom Volume
  • Custom Rate Limit
  • Specialized Customer Support
  • Real-Time API Monitoring

Customer favorite features

  • βœ”οΈŽ Only Pay for Successful Requests
  • βœ”οΈŽ Free 7-Day Trial
  • βœ”οΈŽ Multi-Language Support
  • βœ”οΈŽ One API Key, All APIs.
  • βœ”οΈŽ Intuitive Dashboard
  • βœ”οΈŽ Comprehensive Error Handling
  • βœ”οΈŽ Developer-Friendly Docs
  • βœ”οΈŽ Postman Integration
  • βœ”οΈŽ Secure HTTPS Connections
  • βœ”οΈŽ Reliable Uptime

The Article Data Extractor API is designed to extract relevant information from articles or blogs by providing the URL of the desired webpage. It scrapes and retrieves data such as the article's title, text, published time, media links, and more. The API aims to save time by delivering structured data that can be easily filtered, queried, and stored for further use.

The Article Data Extractor API can extract various types of information from articles or blogs. This includes the article's title, main text content, published time, media links (such as images or videos embedded within the article), and potentially other metadata associated with the article.

The accuracy of data extraction depends on factors such as the structure and quality of the webpage, as well as the consistency of its layout and formatting. The API employs scraping techniques to retrieve information, and its accuracy may vary based on these factors. However, it is designed to provide reliable and relevant data from the provided article or blog URL.

At the moment, batch requests are not supported; you will have to make one API call per article you want to extract data from.

The extracted data from the articles or blogs is typically returned in a structured format, such as JSON. This makes it easier to work with the data programmatically, as you can access specific fields and properties. The API organizes the extracted information in a structured manner, allowing you to filter, query, and store the data as per your requirements.

Zyla API Hub is like a big store for APIs, where you can find thousands of them all in one place. We also offer dedicated support and real-time monitoring of all APIs. Once you sign up, you can pick and choose which APIs you want to use. Just remember, each API needs its own subscription. But if you subscribe to multiple ones, you'll use the same key for all of them, making things easier for you.

Prices are listed in USD (United States Dollar), EUR (Euro), CAD (Canadian Dollar), AUD (Australian Dollar), and GBP (British Pound). We accept all major debit and credit cards. Our payment system uses the latest security technology and is powered by Stripe, one of the world’s most reliable payment companies. If you have any trouble paying by card, just contact us at [email protected]

Additionally, if you already have an active subscription in any of these currencies (USD, EUR, CAD, AUD, GBP), that currency will remain for subsequent subscriptions. You can change the currency at any time as long as you don't have any active subscriptions.

The local currency shown on the pricing page is based on the country of your IP address and is provided for reference only. The actual prices are in USD (United States Dollar). When you make a payment, the charge will appear on your card statement in USD, even if you see the equivalent amount in your local currency on our website. This means you cannot pay directly with your local currency.

Occasionally, a bank may decline the charge due to its fraud protection settings. We suggest reaching out to your bank first to check whether it is blocking our charges. You can also access the Billing Portal and change the card used for the payment. If this does not work and you need further assistance, please contact our team at [email protected]

Prices are determined by a recurring monthly or yearly subscription, depending on the chosen plan.

API calls are deducted from your plan based on successful requests. Each plan comes with a specific number of calls that you can make per month. Only successful calls, indicated by a Status 200 response, will be counted against your total. This ensures that failed or incomplete requests do not impact your monthly quota.
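Because only HTTP 200 responses are deducted from the quota, a client can retry failed calls without consuming it. A minimal sketch with hypothetical helpers:

```python
# Illustrative helpers (not part of the API), based on the documented rule
# that only Status 200 responses count against the monthly quota.
def counts_against_quota(status: int) -> bool:
    """Only successful (HTTP 200) calls are deducted from the plan."""
    return status == 200

def should_retry(status: int, attempt: int, max_attempts: int = 3) -> bool:
    """Retry non-200 responses up to max_attempts times; retries cost nothing."""
    return status != 200 and attempt < max_attempts
```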

Zyla API Hub works on a recurring monthly subscription system. Your billing cycle starts the day you purchase one of the paid plans and renews on the same day of the following month. Be sure to cancel your subscription beforehand if you want to avoid future charges.

To upgrade your current subscription plan, simply go to the pricing page of the API and select the plan you want to upgrade to. The upgrade will be instant, allowing you to immediately enjoy the features of the new plan. Please note that any remaining calls from your previous plan will not be carried over to the new plan, so be aware of this when upgrading. You will be charged the full amount of the new plan.

To check how many API calls you have left for the current month, look at the β€˜X-Zyla-API-Calls-Monthly-Remaining’ header. For example, if your plan allows 1000 requests per month and you've used 100, this header will show 900.

To see the maximum number of API requests your plan allows, check the β€˜X-Zyla-RateLimit-Limit’ header. For instance, if your plan includes 1000 requests per month, this header will display 1000.

The β€˜X-Zyla-RateLimit-Reset’ header shows the number of seconds until your rate limit resets. This tells you when your request count will start fresh. For example, if it displays 3600, it means 3600 seconds are left until the limit resets.

Yes, you can cancel your plan anytime by going to your account and selecting the cancellation option on the Billing page. Please note that upgrades, downgrades, and cancellations take effect immediately. Additionally, upon cancellation, you will no longer have access to the service, even if you have remaining calls left in your quota.

You can contact us through our chat channel to receive immediate assistance. We are always online from 8 am to 5 pm (EST). If you reach us after that time, we will get back to you as soon as possible. Additionally, you can contact us via email at [email protected]

To let you experience our APIs without any commitment, we offer a 7-day free trial that allows you to make API calls at no cost during this period. Please note that you can only use this trial once, so make sure to use it with the API that interests you the most. Most of our APIs provide a free trial, but some may not support it.

After 7 days, you will be charged the full amount for the plan you were subscribed to during the trial. Therefore, it’s important to cancel before the trial period ends. Refund requests for forgetting to cancel on time are not accepted.

When you subscribe to an API trial, you can make only 25% of the calls allowed by that plan. For example, if the API plan offers 1000 calls, you can make only 250 during the trial. To access the full number of calls offered by the plan, you will need to subscribe to the full plan.

Service Level: 100%
Response Time: 1,316 ms
