Extractor Web API

Extract clean, unformatted text and markdown from web pages for analysis, documentation, or display with this versatile API.

About the API:

The Web Extractor API is a robust tool designed to retrieve clean, structured text from web pages. With its two specialized endpoints, it lets users extract meaningful content free of ads, navigation elements, and other irrelevant details. Its markdown conversion endpoint also turns web pages into structured markdown documents, ideal for blogging, content management, and integration with markdown-based platforms. Compatible with both static and dynamic pages, this API adapts to complex web structures to deliver consistent, high-quality results.

API Documentation

Endpoints


To use this endpoint, send a request with the URL of the web page and receive the clean text extracted from that page's content.



                                                                            
POST https://zylalabs.com/api/5660/extractor+web+api/7369/recuperar+texto+limpio.

Retrieve Clean Text - Endpoint Features

Object          Description
Request Body    [Required] JSON

SAMPLE API RESPONSE

       
                                                                                                        
                                                                                                                                                                                                                            {"response":"Spark Basics\nSuppose we have a web application hosted in an application orchestrator like kubernetes. If load in that particular application increases then we can horizontally scale our application simply by increasing the number of pods in our service.\nNow let’s suppose there is heavy compute operation happening in each of the pods. Then there will be certain limit upto which these services can run because unlike horizontal scaling where you can have as many numbers of machines as required, there is limit for vertical scaling because you can’t have unlimited ram and cpu cores for each of the machines in a cluster. Distributed Computing removes this limitation of vertical scaling by distributing the processing across cluster of machines. Now, a group of machines alone is not powerful, you need a framework to coordinate work across them. Spark does just that, managing and coordinating the execution of tasks on data across a cluster of computers. The cluster of machines that Spark will use to execute tasks is managed by a cluster manager like Spark’s standalone cluster manager, Kubernetes, YARN, or Mesos.\nSpark Basics\nSpark is distributed data processing engine. Distributed data processing in big data is simply series of map and reduce functions which runs across the cluster machines. Given below is python code for calculating the sum of all the even numbers from a given list with the help of map and reduce functions.\nfrom functools import reduce\na = [1,2,3,4,5]\nres = reduce(lambda x,y: x+y, (map(lambda x: x if x%2==0 else 0, a)))\nNow consider, if instead of a simple list, it is a parquet file of size in order of gigabytes. Computation with MapReduce system becomes optimized way of dealing with such problems. In this case spark will load the big parquet file into multiple worker nodes (if the file doesn’t support distributed storage then it will be first loaded into driver node and afterwards, it will get distributed across the worker nodes). Then map function will be executed for each task in each worker node and the final result will fetched with the reduce function.\nSpark timeline\nGoogle was first to introduce large scale distributed computing solution with MapReduce and its own distributed file system i.e., Google File System(GFS). GFS provided a blueprint for the Hadoop File System (HDFS), including the MapReduce implementation as a framework for distributed computing. Apache Hadoop framework was developed consisting of Hadoop Common, MapReduce, HDFS, and Apache Hadoop YARN. There were various limitations with Apache Hadoop like it fell short for combining other workloads such as machine learning, streaming, or interactive SQL-like queries etc. Also the results of the reduce computations were written to a local disk for subsequent stage of operations. Then came the Spark. Spark provides in-memory storage for intermediate computations, making it much faster than Hadoop MapReduce. It incorporates libraries with composable APIs for machine learning (MLlib), SQL for interactive queries (Spark SQL), stream processing (Structured Streaming) for interacting with real-time data, and graph processing (GraphX).\nSpark Application\nSpark Applications consist of a driver process and a set of executor processes. 
The driver process runs your main() function, sits on a node in the cluster. The executors are responsible for actually carrying out the work that the driver assigns them. The driver and executors are simply processes, which means that they can live on the same machine or different machines.\nThere is a SparkSession object available to the user, which is the entrance point to running Spark code. When using Spark from Python or R, you don’t write explicit JVM instructions; instead, you write Python and R code that Spark translates into code that it then can run on the executor JVMs.\nSpark’s language APIs make it possible for you to run Spark code using various programming languages like Scala, Java, Python, SQL and R.\nSpark has two fundamental sets of APIs: the low-level “unstructured” APIs (RDDs), and the higher-level structured APIs (Dataframes, Datasets).\nSpark Toolsets\nA DataFrame is the most common Structured API and simply represents a table of data with rows and columns. To allow every executor to perform work in parallel, Spark breaks up the data into chunks called partitions. A partition is a collection of rows that sit on one physical machine in your cluster.\nIf a function returns a Dataframe or Dataset or Resilient Distributed Dataset (RDD) then it is a transformation and if it doesn’t return anything then it’s an action. An action instructs Spark to compute a result from a series of transformations. The simplest action is count.\nTransformation are of types narrow and wide. Narrow transformations are those for which each input partition will contribute to only one output partition. Wide transformation will have input partitions contributing to many output partitions.\nSparks performs a lazy evaluation which means that Spark will wait until the very last moment to execute the graph of computation instructions. This provides immense benefits because Spark can optimize the entire data flow from end to end.\nSpark-submit\nReferences\n- https://spark.apache.org/docs/latest/\n- spark: The Definitive Guide by Bill Chambers and Matei Zaharia"}
                                                                                                                                                                                                                    
                                                                                                    

Retrieve Clean Text - CODE EXAMPLES


curl --location --request POST 'https://zylalabs.com/api/5660/extractor+web+api/7369/recuperar+texto+limpio.' \
--header 'Authorization: Bearer YOUR_API_KEY' \
--data-raw '{
  "url": "https://techtalkverse.com/post/software-development/spark-basics/"
}'
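The same request can be made from Python. The sketch below is a minimal example, assuming the requests library and that the endpoint accepts a JSON body with a "url" field and returns a JSON object whose "response" key holds the extracted text, as in the sample response above.

import requests

API_KEY = "YOUR_API_KEY"  # personal API access key from your dashboard
ENDPOINT = "https://zylalabs.com/api/5660/extractor+web+api/7369/recuperar+texto+limpio."

def retrieve_clean_text(page_url: str) -> str:
    """POST a page URL to the Retrieve Clean Text endpoint and return the extracted text."""
    resp = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"url": page_url},
    )
    resp.raise_for_status()  # only successful (200) responses count against your quota
    return resp.json()["response"]

if __name__ == "__main__":
    text = retrieve_clean_text("https://techtalkverse.com/post/software-development/spark-basics/")
    print(text[:500])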

    

To use this endpoint, send a request with the URL of the web page and receive that page's content converted to markdown format.



                                                                            
POST https://zylalabs.com/api/5660/extractor+web+api/7370/extracci%c3%b3n+de+contenido+de+texto.

Text Content Extract - Endpoint Features

Object          Description
Request Body    [Required] JSON

SAMPLE API RESPONSE

       
                                                                                                        
                                                                                                                                                                                                                            {"response":"---\ntitle: Spark Basics\nurl: https://techtalkverse.com/post/software-development/spark-basics/\nhostname: techtalkverse.com\ndescription: Suppose we have a web application hosted in an application orchestrator like kubernetes. If load in that particular application increases then we can horizontally scale our application simply by increasing the number of pods in our service.\nsitename: techtalkverse.com\ndate: 2023-05-01\ncategories: ['post']\n---\n# Spark Basics\n\nSuppose we have a web application hosted in an application orchestrator like kubernetes. If load in that particular application increases then we can horizontally scale our application simply by increasing the number of pods in our service.\n\nNow let’s suppose there is heavy compute operation happening in each of the pods. Then there will be certain limit upto which these services can run because unlike horizontal scaling where you can have as many numbers of machines as required, there is limit for vertical scaling because you can’t have unlimited ram and cpu cores for each of the machines in a cluster. **Distributed Computing** removes this limitation of vertical scaling by distributing the processing across cluster of machines.\nNow, a group of machines alone is not powerful, you need a framework to\ncoordinate work across them. Spark does just that, managing and coordinating the execution of tasks on data across a cluster of computers. The cluster of machines that Spark will use to execute tasks is managed by a cluster manager like Spark’s standalone cluster manager, Kubernetes, YARN, or Mesos.\n\n## Spark Basics\n\nSpark is distributed data processing engine. Distributed data processing in big data is simply series of map and reduce functions which runs across the cluster machines. Given below is python code for calculating the sum of all the even numbers from a given list with the help of map and reduce functions.\n\n```\nfrom functools import reduce\na = [1,2,3,4,5]\nres = reduce(lambda x,y: x+y, (map(lambda x: x if x%2==0 else 0, a)))\n```\n\n\nNow consider, if instead of a simple list, it is a parquet file of size in order of gigabytes. Computation with MapReduce system becomes optimized way of dealing with such problems. In this case spark will load the big parquet file into multiple worker nodes (if the file doesn’t support distributed storage then it will be first loaded into driver node and afterwards, it will get distributed across the worker nodes). Then map function will be executed for each task in each worker node and the final result will fetched with the reduce function.\n\n## Spark timeline\n\nGoogle was first to introduce large scale distributed computing solution with **MapReduce** and its own distributed file system i.e., **Google File System(GFS)**. GFS provided a blueprint for the **Hadoop File System (HDFS)**, including the MapReduce implementation as a framework for distributed computing. **Apache Hadoop** framework was developed consisting of Hadoop Common, MapReduce, HDFS, and Apache Hadoop YARN. There were various limitations with Apache Hadoop like it fell short for combining other workloads such as machine learning, streaming, or interactive SQL-like queries etc. 
Also the results of the reduce computations were written to a local disk for subsequent stage of operations. Then came the **Spark**. Spark provides in-memory storage for intermediate computations, making it much faster than Hadoop MapReduce. It incorporates libraries with composable APIs for\nmachine learning (MLlib), SQL for interactive queries (Spark SQL), stream processing (Structured Streaming) for interacting with real-time data, and graph processing (GraphX).\n\n## Spark Application\n\n**Spark Applications** consist of a driver process and a set of executor processes. The **driver** process runs your main() function, sits on a node in the cluster. The **executors** are responsible for actually carrying out the work that the driver assigns them. The driver and executors are simply processes, which means that they can live on the same machine or different machines.\n\nThere is a **SparkSession** object available to the user, which is the entrance point to running Spark code. When using Spark from Python or R, you don’t write explicit JVM instructions; instead, you write Python and R code that Spark translates into code that it then can run on the executor JVMs.\n**Spark’s language APIs** make it possible for you to run Spark code using various programming languages like Scala, Java, Python, SQL and R.\nSpark has two fundamental sets of APIs: the **low-level “unstructured” APIs** (RDDs), and the **higher-level structured APIs** (Dataframes, Datasets).\n\n## Spark Toolsets\n\nA **DataFrame** is the most common Structured API and simply represents a table of data with rows and columns. To allow every executor to perform work in parallel, Spark breaks up the data into chunks called partitions. A **partition** is a collection of rows that sit on one physical machine in your cluster.\n\nIf a function returns a Dataframe or Dataset or Resilient Distributed Dataset (RDD) then it is a **transformation** and if it doesn’t return anything then it’s an **action**. An action instructs Spark to compute a result from a series of transformations. The simplest action is count.\n\nTransformation are of types narrow and wide. **Narrow transformations** are those for which each input partition will contribute to only one output partition. **Wide transformation** will have input partitions contributing to many output partitions.\n\nSparks performs a **lazy evaluation** which means that Spark will wait until the very last moment to execute the graph of computation instructions. This provides immense benefits because Spark can optimize the entire data flow from end to end.\n\n## Spark-submit\n\n## References\n\n- https://spark.apache.org/docs/latest/\n- spark: The Definitive Guide by Bill Chambers and Matei Zaharia"}
                                                                                                                                                                                                                    
                                                                                                    

Text Content Extract - CODE EXAMPLES


curl --location --request POST 'https://zylalabs.com/api/5660/extractor+web+api/7370/extracci%c3%b3n+de+contenido+de+texto.' \
--header 'Authorization: Bearer YOUR_API_KEY' \
--data-raw '{
  "url": "https://techtalkverse.com/post/software-development/spark-basics/"
}'
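A Python equivalent of the call above is sketched below, again assuming the requests library. The front-matter parsing is illustrative only: it reflects the '---'-delimited metadata block (title, url, hostname, date, categories, etc.) shown in the sample response and simply splits it away from the markdown body.

import requests

API_KEY = "YOUR_API_KEY"  # personal API access key from your dashboard
ENDPOINT = "https://zylalabs.com/api/5660/extractor+web+api/7370/extracci%c3%b3n+de+contenido+de+texto."

def extract_markdown(page_url: str):
    """Fetch a page as markdown and split the front-matter metadata from the body."""
    resp = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"url": page_url},
    )
    resp.raise_for_status()
    markdown = resp.json()["response"]

    # The sample response wraps metadata in a '---' front-matter block before the markdown body.
    metadata, body = {}, markdown
    if markdown.startswith("---\n"):
        front, _, body = markdown[4:].partition("\n---\n")
        for line in front.splitlines():
            key, _, value = line.partition(":")
            if key.strip():
                metadata[key.strip()] = value.strip()
    return metadata, body

if __name__ == "__main__":
    meta, md = extract_markdown("https://techtalkverse.com/post/software-development/spark-basics/")
    print(meta.get("title"), "-", meta.get("date"))
    print(md[:300])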

    

API Access Key & Authentication

After signing up, every developer is assigned a personal API access key, a unique combination of letters and digits used to access our API endpoint. To authenticate with the Extractor Web API, simply include your bearer token in the Authorization header.
Headers
Header          Description
Authorization   [Required] Should be Bearer access_key. See "Your API Access Key" above once you are subscribed.
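As a minimal sketch (assuming Python with the requests library, not an official client), the bearer token can be attached to every call through a session so it does not have to be repeated per request:

import requests

session = requests.Session()
session.headers.update({"Authorization": "Bearer YOUR_API_KEY"})  # replace with your personal access key

# Any call made with this session carries the Authorization header automatically.
resp = session.post(
    "https://zylalabs.com/api/5660/extractor+web+api/7369/recuperar+texto+limpio.",
    json={"url": "https://techtalkverse.com/post/software-development/spark-basics/"},
)
print(resp.status_code)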

Simple, Transparent Pricing

No long-term commitment. Upgrade, downgrade, or cancel anytime. The Free Trial includes up to 50 requests.

🚀 CUSTOM ENTERPRISE PLAN

Starts at
$10,000/Year


  • Custom Volume
  • Custom Request Limit
  • Specialized Customer Support
  • Real-Time API Monitoring

Customers' Favorite Features

  • ✔︎ Pay Only for Successful Requests
  • ✔︎ 7-Day Free Trial
  • ✔︎ Multi-Language Support
  • ✔︎ One API Key, All APIs
  • ✔︎ Intuitive Dashboard
  • ✔︎ Comprehensive Error Handling
  • ✔︎ Developer-Friendly Documentation
  • ✔︎ Postman Integration
  • ✔︎ Secure HTTPS Connections
  • ✔︎ Reliable Uptime

Extractor Web API FAQs

The Web Extractor API is designed to extract clean, structured text and markdown from web pages, allowing users to analyze, document, or display content without irrelevant details like ads or navigation elements.

The API is compatible with both static and dynamic web pages, adapting to complex web structures to ensure consistent and high-quality results during content extraction.

The main features include two specialized endpoints for extracting clean text and converting web pages into structured markdown documents, making it suitable for blogging and content management.

Yes, the Web Extractor API is designed to handle complex web structures, ensuring that it retrieves meaningful content accurately, regardless of the page layout.

The markdown conversion endpoint formats the extracted content into structured markdown documents, which can be easily integrated with markdown-based platforms for seamless content management.

The "Retrieve Clean Text" endpoint returns unformatted text extracted from a web page, while the "Text Content Extract" endpoint returns structured markdown, including metadata like title, URL, description, and categories.

The "Retrieve Clean Text" response contains the clean text content. The "Text Content Extract" response includes fields such as title, URL, hostname, description, sitename, date, and categories, providing comprehensive context for the extracted content.

The "Retrieve Clean Text" response is a simple JSON object with a "response" key containing the text. The "Text Content Extract" response is a more complex JSON object with multiple keys, including metadata and the markdown content, structured for easy integration.

The "Retrieve Clean Text" endpoint provides the main textual content of a web page, while the "Text Content Extract" endpoint offers both the text and additional metadata, such as the page's title, URL, and categories, enhancing content context.

Users can customize requests by specifying different URLs for extraction. The API processes the provided URL to return the relevant clean text or markdown, allowing flexibility in content sourcing.

Typical use cases include content analysis, documentation, and blogging. Users can extract clean text for research or convert web pages into markdown for easy integration into content management systems or blogs.

The Web Extractor API employs algorithms to parse and extract relevant content while filtering out ads and navigation elements, ensuring high accuracy in the extracted data. Continuous updates to the extraction logic help maintain quality.

Users can expect the "Retrieve Clean Text" endpoint to return coherent paragraphs of text, while the "Text Content Extract" will yield structured markdown with clear metadata. Both outputs are designed to be clean and easy to use in various applications.

General FAQs

Zyla API Hub is like a big API store, where you can find thousands of APIs in one place. We also offer dedicated support and real-time monitoring of all APIs. Once you sign up, you can choose which APIs you want to use. Just remember that each API needs its own subscription. But if you subscribe to several, you will use the same key for all of them, which makes everything easier for you.

Prices are shown in USD (US dollar), EUR (euro), CAD (Canadian dollar), AUD (Australian dollar), and GBP (pound sterling). We accept all major debit and credit cards. Our payment system uses the latest security technology and is backed by Stripe, one of the most trusted payment companies in the world. If you have any trouble paying by card, contact us at [email protected]


Also, if you already have an active subscription in any of these currencies (USD, EUR, CAD, AUD, GBP), that currency will be kept for subsequent subscriptions. You can change the currency at any time as long as you have no active subscriptions.

The local currency shown on the pricing page is based on the country of your IP address and is provided for reference only. Actual prices are in USD (US dollar). When you make a payment, the charge will appear on your statement in USD, even if you see the equivalent amount in your local currency on our website. This means you cannot pay directly in your local currency.

Occasionally, a bank may decline the charge due to its fraud-protection settings. We suggest contacting your bank first to check whether they are blocking our charges. You can also access the Billing Portal and change the card associated with the payment. If that doesn't work and you need more help, please contact our team at [email protected]

Prices are set through a recurring monthly or yearly subscription, depending on the chosen plan.

API calls are deducted from your plan based on successful requests. Each plan includes a specific number of calls you can make per month. Only successful calls, indicated by a response with status 200, count toward your total. This ensures that failed or incomplete requests do not affect your monthly quota.

Zyla API Hub works on a recurring monthly subscription system. Your billing cycle starts the day you purchase one of the paid plans and renews on the same day the following month. So remember to cancel your subscription beforehand if you want to avoid future charges.

To upgrade your current subscription plan, simply go to the API's pricing page and select the plan you want to upgrade to. The upgrade is instant, letting you enjoy the new plan's features immediately. Keep in mind that remaining calls from your previous plan do not carry over to the new plan, so factor this in when upgrading. You will be charged the full amount of the new plan.

To check how many API calls you have left for the current month, look at the 'X-Zyla-API-Calls-Monthly-Remaining' field in the response header. For example, if your plan allows 1,000 requests per month and you have used 100, this field will show 900 remaining calls.

To see the maximum number of API requests your plan allows, check the 'X-Zyla-RateLimit-Limit' response header. For example, if your plan includes 1,000 requests per month, this header will show 1,000.

The 'X-Zyla-RateLimit-Reset' header shows the number of seconds until your limit resets, telling you when your request count will restart. For example, if it shows 3,600, it means 3,600 seconds remain until the limit resets.
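As a sketch (again assuming Python and the requests library), the three quota headers described above can be read from any endpoint response:

import requests

resp = requests.post(
    "https://zylalabs.com/api/5660/extractor+web+api/7369/recuperar+texto+limpio.",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={"url": "https://techtalkverse.com/post/software-development/spark-basics/"},
)

# Quota headers described in the FAQ answers above.
print("Calls remaining this month:", resp.headers.get("X-Zyla-API-Calls-Monthly-Remaining"))
print("Monthly request limit:", resp.headers.get("X-Zyla-RateLimit-Limit"))
print("Seconds until the limit resets:", resp.headers.get("X-Zyla-RateLimit-Reset"))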

Yes, you can cancel your plan at any time from your account by selecting the cancel option on the Billing page. Keep in mind that upgrades, downgrades, and cancellations take effect immediately. Also, once you cancel you will no longer have access to the service, even if you had calls remaining in your quota.

You can contact us through our chat channel for immediate assistance. We are always online from 8 a.m. to 5 p.m. (EST). If you contact us outside that window, we will get back to you as soon as possible. You can also email us at [email protected]

To give you the chance to try our APIs without commitment, we offer a 7-day free trial that lets you make up to 50 API calls at no cost. This trial can only be used once, so we recommend applying it to the API you are most interested in. Although most of our APIs offer a free trial, some may not. The trial ends after 7 days or once you make 50 requests, whichever comes first. If you reach the 50-request limit during the trial, you will need to "Start your Paid Plan" to keep making requests. You can find the "Start your Paid Plan" button in your profile under Subscription -> Choose the API you are subscribed to -> Pricing tab. Alternatively, if you do not cancel your subscription before day 7, your free trial ends and your plan is charged automatically, giving you access to all the API calls specified in your plan. Keep this in mind to avoid unwanted charges.

After 7 days, you will be charged the full amount of the plan you were subscribed to during the trial. That is why it is important to cancel before the trial period ends. Refund requests for forgetting to cancel on time are not accepted.

When you subscribe to an API's free trial, you can make up to 50 calls. If you want to make more calls after this limit, the API will ask you to "Start your Paid Plan". You can find the "Start your Paid Plan" button in your profile under Subscription -> Choose the API you are subscribed to -> Pricing tab.

Payment orders are processed between the 20th and the 30th of each month. If you submit your request before the 20th, your payment will be processed within that period.

Service Level
100%
Response Time
8.257ms

Category:


Related APIs