Markdown Maker API

Convert web pages into actionable content by extracting clean text or converting it to markdown for easy integration and processing.

About the API:

The Markdown Maker API simplifies the process of converting web content into structured markdown or clean text. Its clean text endpoint ensures that only the relevant content is retrieved, stripping out menus, ads, and other non-essential elements. The markdown endpoint additionally lets developers transform content into markdown, streamlining workflows for content management systems, blogs, and documentation. Designed for versatility, the API supports a wide range of web pages and formats for smooth integration and reliable performance.

API Documentation

Endpoints


To use this endpoint, send a request with the URL of the web page and receive the clean text extracted from that page's content.



                                                                            
POST https://zylalabs.com/api/5661/creador+de+markdown+api/7371/contenido+de+markdown+extracto.
                                                                            
                                                                        

Contenido de Markdown Extracto. - Endpoint Features

Object            Description
Request Body      [Required] JSON

SAMPLE API RESPONSE

       
                                                                                                        
                                                                                                                                                                                                                            {"response":"Spark Basics\nSuppose we have a web application hosted in an application orchestrator like kubernetes. If load in that particular application increases then we can horizontally scale our application simply by increasing the number of pods in our service.\nNow let’s suppose there is heavy compute operation happening in each of the pods. Then there will be certain limit upto which these services can run because unlike horizontal scaling where you can have as many numbers of machines as required, there is limit for vertical scaling because you can’t have unlimited ram and cpu cores for each of the machines in a cluster. Distributed Computing removes this limitation of vertical scaling by distributing the processing across cluster of machines. Now, a group of machines alone is not powerful, you need a framework to coordinate work across them. Spark does just that, managing and coordinating the execution of tasks on data across a cluster of computers. The cluster of machines that Spark will use to execute tasks is managed by a cluster manager like Spark’s standalone cluster manager, Kubernetes, YARN, or Mesos.\nSpark Basics\nSpark is distributed data processing engine. Distributed data processing in big data is simply series of map and reduce functions which runs across the cluster machines. Given below is python code for calculating the sum of all the even numbers from a given list with the help of map and reduce functions.\nfrom functools import reduce\na = [1,2,3,4,5]\nres = reduce(lambda x,y: x+y, (map(lambda x: x if x%2==0 else 0, a)))\nNow consider, if instead of a simple list, it is a parquet file of size in order of gigabytes. Computation with MapReduce system becomes optimized way of dealing with such problems. In this case spark will load the big parquet file into multiple worker nodes (if the file doesn’t support distributed storage then it will be first loaded into driver node and afterwards, it will get distributed across the worker nodes). Then map function will be executed for each task in each worker node and the final result will fetched with the reduce function.\nSpark timeline\nGoogle was first to introduce large scale distributed computing solution with MapReduce and its own distributed file system i.e., Google File System(GFS). GFS provided a blueprint for the Hadoop File System (HDFS), including the MapReduce implementation as a framework for distributed computing. Apache Hadoop framework was developed consisting of Hadoop Common, MapReduce, HDFS, and Apache Hadoop YARN. There were various limitations with Apache Hadoop like it fell short for combining other workloads such as machine learning, streaming, or interactive SQL-like queries etc. Also the results of the reduce computations were written to a local disk for subsequent stage of operations. Then came the Spark. Spark provides in-memory storage for intermediate computations, making it much faster than Hadoop MapReduce. It incorporates libraries with composable APIs for machine learning (MLlib), SQL for interactive queries (Spark SQL), stream processing (Structured Streaming) for interacting with real-time data, and graph processing (GraphX).\nSpark Application\nSpark Applications consist of a driver process and a set of executor processes. 
The driver process runs your main() function, sits on a node in the cluster. The executors are responsible for actually carrying out the work that the driver assigns them. The driver and executors are simply processes, which means that they can live on the same machine or different machines.\nThere is a SparkSession object available to the user, which is the entrance point to running Spark code. When using Spark from Python or R, you don’t write explicit JVM instructions; instead, you write Python and R code that Spark translates into code that it then can run on the executor JVMs.\nSpark’s language APIs make it possible for you to run Spark code using various programming languages like Scala, Java, Python, SQL and R.\nSpark has two fundamental sets of APIs: the low-level “unstructured” APIs (RDDs), and the higher-level structured APIs (Dataframes, Datasets).\nSpark Toolsets\nA DataFrame is the most common Structured API and simply represents a table of data with rows and columns. To allow every executor to perform work in parallel, Spark breaks up the data into chunks called partitions. A partition is a collection of rows that sit on one physical machine in your cluster.\nIf a function returns a Dataframe or Dataset or Resilient Distributed Dataset (RDD) then it is a transformation and if it doesn’t return anything then it’s an action. An action instructs Spark to compute a result from a series of transformations. The simplest action is count.\nTransformation are of types narrow and wide. Narrow transformations are those for which each input partition will contribute to only one output partition. Wide transformation will have input partitions contributing to many output partitions.\nSparks performs a lazy evaluation which means that Spark will wait until the very last moment to execute the graph of computation instructions. This provides immense benefits because Spark can optimize the entire data flow from end to end.\nSpark-submit\nReferences\n- https://spark.apache.org/docs/latest/\n- spark: The Definitive Guide by Bill Chambers and Matei Zaharia"}
                                                                                                                                                                                                                    
                                                                                                    

Contenido de Markdown Extracto. - CODE EXAMPLES


curl --location --request POST 'https://zylalabs.com/api/5661/creador+de+markdown+api/7371/contenido+de+markdown+extracto.' \
--header 'Authorization: Bearer YOUR_API_KEY' \
--data-raw '{
  "url": "https://techtalkverse.com/post/software-development/spark-basics/"
}'
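
The same request can also be issued from code. Below is a minimal Python sketch, assuming the third-party requests library; YOUR_API_KEY is a placeholder and the printed slice is only for illustration:

import requests

API_KEY = "YOUR_API_KEY"  # placeholder: replace with your personal access key
ENDPOINT = "https://zylalabs.com/api/5661/creador+de+markdown+api/7371/contenido+de+markdown+extracto."

# The target page URL goes in the JSON body, as in the curl example above.
payload = {"url": "https://techtalkverse.com/post/software-development/spark-basics/"}
headers = {"Authorization": f"Bearer {API_KEY}"}

response = requests.post(ENDPOINT, json=payload, headers=headers)
response.raise_for_status()  # only 200 responses count against your quota

# The extracted clean text is returned under the "response" key.
clean_text = response.json()["response"]
print(clean_text[:500])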

    

To use this endpoint, send a request with the URL of the web page and receive that page's content converted to markdown format.



                                                                            
POST https://zylalabs.com/api/5661/creador+de+markdown+api/7372/web+a+markdown
                                                                            
                                                                        

Web a Markdown - Endpoint Features

Object            Description
Request Body      [Required] JSON

SAMPLE API RESPONSE

       
                                                                                                        
                                                                                                                                                                                                                            {"response":"---\ntitle: Spark Basics\nurl: https://techtalkverse.com/post/software-development/spark-basics/\nhostname: techtalkverse.com\ndescription: Suppose we have a web application hosted in an application orchestrator like kubernetes. If load in that particular application increases then we can horizontally scale our application simply by increasing the number of pods in our service.\nsitename: techtalkverse.com\ndate: 2023-05-01\ncategories: ['post']\n---\n# Spark Basics\n\nSuppose we have a web application hosted in an application orchestrator like kubernetes. If load in that particular application increases then we can horizontally scale our application simply by increasing the number of pods in our service.\n\nNow let’s suppose there is heavy compute operation happening in each of the pods. Then there will be certain limit upto which these services can run because unlike horizontal scaling where you can have as many numbers of machines as required, there is limit for vertical scaling because you can’t have unlimited ram and cpu cores for each of the machines in a cluster. **Distributed Computing** removes this limitation of vertical scaling by distributing the processing across cluster of machines.\nNow, a group of machines alone is not powerful, you need a framework to\ncoordinate work across them. Spark does just that, managing and coordinating the execution of tasks on data across a cluster of computers. The cluster of machines that Spark will use to execute tasks is managed by a cluster manager like Spark’s standalone cluster manager, Kubernetes, YARN, or Mesos.\n\n## Spark Basics\n\nSpark is distributed data processing engine. Distributed data processing in big data is simply series of map and reduce functions which runs across the cluster machines. Given below is python code for calculating the sum of all the even numbers from a given list with the help of map and reduce functions.\n\n```\nfrom functools import reduce\na = [1,2,3,4,5]\nres = reduce(lambda x,y: x+y, (map(lambda x: x if x%2==0 else 0, a)))\n```\n\n\nNow consider, if instead of a simple list, it is a parquet file of size in order of gigabytes. Computation with MapReduce system becomes optimized way of dealing with such problems. In this case spark will load the big parquet file into multiple worker nodes (if the file doesn’t support distributed storage then it will be first loaded into driver node and afterwards, it will get distributed across the worker nodes). Then map function will be executed for each task in each worker node and the final result will fetched with the reduce function.\n\n## Spark timeline\n\nGoogle was first to introduce large scale distributed computing solution with **MapReduce** and its own distributed file system i.e., **Google File System(GFS)**. GFS provided a blueprint for the **Hadoop File System (HDFS)**, including the MapReduce implementation as a framework for distributed computing. **Apache Hadoop** framework was developed consisting of Hadoop Common, MapReduce, HDFS, and Apache Hadoop YARN. There were various limitations with Apache Hadoop like it fell short for combining other workloads such as machine learning, streaming, or interactive SQL-like queries etc. 
Also the results of the reduce computations were written to a local disk for subsequent stage of operations. Then came the **Spark**. Spark provides in-memory storage for intermediate computations, making it much faster than Hadoop MapReduce. It incorporates libraries with composable APIs for\nmachine learning (MLlib), SQL for interactive queries (Spark SQL), stream processing (Structured Streaming) for interacting with real-time data, and graph processing (GraphX).\n\n## Spark Application\n\n**Spark Applications** consist of a driver process and a set of executor processes. The **driver** process runs your main() function, sits on a node in the cluster. The **executors** are responsible for actually carrying out the work that the driver assigns them. The driver and executors are simply processes, which means that they can live on the same machine or different machines.\n\nThere is a **SparkSession** object available to the user, which is the entrance point to running Spark code. When using Spark from Python or R, you don’t write explicit JVM instructions; instead, you write Python and R code that Spark translates into code that it then can run on the executor JVMs.\n**Spark’s language APIs** make it possible for you to run Spark code using various programming languages like Scala, Java, Python, SQL and R.\nSpark has two fundamental sets of APIs: the **low-level “unstructured” APIs** (RDDs), and the **higher-level structured APIs** (Dataframes, Datasets).\n\n## Spark Toolsets\n\nA **DataFrame** is the most common Structured API and simply represents a table of data with rows and columns. To allow every executor to perform work in parallel, Spark breaks up the data into chunks called partitions. A **partition** is a collection of rows that sit on one physical machine in your cluster.\n\nIf a function returns a Dataframe or Dataset or Resilient Distributed Dataset (RDD) then it is a **transformation** and if it doesn’t return anything then it’s an **action**. An action instructs Spark to compute a result from a series of transformations. The simplest action is count.\n\nTransformation are of types narrow and wide. **Narrow transformations** are those for which each input partition will contribute to only one output partition. **Wide transformation** will have input partitions contributing to many output partitions.\n\nSparks performs a **lazy evaluation** which means that Spark will wait until the very last moment to execute the graph of computation instructions. This provides immense benefits because Spark can optimize the entire data flow from end to end.\n\n## Spark-submit\n\n## References\n\n- https://spark.apache.org/docs/latest/\n- spark: The Definitive Guide by Bill Chambers and Matei Zaharia"}
                                                                                                                                                                                                                    
                                                                                                    

Web a Markdown - CODE EXAMPLES


curl --location --request POST 'https://zylalabs.com/api/5661/creador+de+markdown+api/7372/web+a+markdown' \
--header 'Authorization: Bearer YOUR_API_KEY' \
--data-raw '{
  "url": "https://techtalkverse.com/post/software-development/spark-basics/"
}'
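
A similar Python sketch for this endpoint, again assuming the requests library; the output file name spark-basics.md is illustrative:

import requests

API_KEY = "YOUR_API_KEY"  # placeholder
ENDPOINT = "https://zylalabs.com/api/5661/creador+de+markdown+api/7372/web+a+markdown"

payload = {"url": "https://techtalkverse.com/post/software-development/spark-basics/"}
headers = {"Authorization": f"Bearer {API_KEY}"}

response = requests.post(ENDPOINT, json=payload, headers=headers)
response.raise_for_status()

# The markdown document (front matter plus body) is returned under "response".
markdown = response.json()["response"]
with open("spark-basics.md", "w", encoding="utf-8") as f:
    f.write(markdown)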

    

API Access Key and Authentication

After signing up, each developer is assigned a personal API access key, a unique combination of letters and digits used to access our API endpoint. To authenticate with the Markdown Maker API, simply include your bearer token in the Authorization header.

Headers

Header            Description
Authorization     [Required] Should be Bearer access_key. See "Your API Access Key" above once you are subscribed.
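
As an optional convenience, the bearer token can be set once on a reusable session. This is a small sketch assuming the Python requests library, not an official client:

import requests

# A reusable session that attaches the bearer token to every request.
session = requests.Session()
session.headers.update({"Authorization": "Bearer YOUR_API_KEY"})

# Any call made through this session now carries the Authorization header, e.g.:
# response = session.post(endpoint_url, json={"url": "https://example.com"})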

Simple, Transparent Pricing

No long-term commitment. Upgrade, downgrade, or cancel anytime. The Free Trial includes up to 50 requests.

🚀 CUSTOM CORPORATE PLAN

Starts at
$10,000/Year


  • Custom Volume
  • Custom Request Limit
  • Specialized Customer Support
  • Real-Time API Monitoring

Customers' Favorite Features

  • ✔︎ Pay Only for Successful Requests
  • ✔︎ 7-Day Free Trial
  • ✔︎ Multi-Language Support
  • ✔︎ One API Key, All APIs.
  • ✔︎ Intuitive Dashboard
  • ✔︎ Comprehensive Error Handling
  • ✔︎ Developer-Friendly Documentation
  • ✔︎ Postman Integration
  • ✔︎ Secure HTTPS Connections
  • ✔︎ Reliable Uptime

Markdown Maker API FAQs

The primary function of the Markdown Maker API is to convert web pages into structured markdown or clean text, allowing for easy integration and processing of web content.

The clean text endpoint retrieves only the relevant content from a web page, eliminating menus, ads, and other non-essential elements to provide a focused output.

Yes, the Markdown Maker API is designed to support a wide range of web pages and formats, ensuring versatility and reliable performance for different types of content.

The markdown endpoint allows developers to transform web content into markdown format, which streamlines workflows for content management systems, blogs, and documentation, making it easier to manage and display content.

Yes, the Markdown Maker API is particularly suitable for content management systems as it simplifies the process of extracting and formatting web content, enhancing efficiency and organization.

The clean text endpoint returns a focused text output, stripping away non-essential elements like ads and menus. The markdown endpoint returns structured markdown content, including metadata such as title, URL, description, and categories, along with the main content formatted in markdown.

For the clean text endpoint, the key field is "response," which contains the extracted text. For the markdown endpoint, key fields include "title," "url," "description," "sitename," "date," "categories," and the main content formatted in markdown.

The clean text response is a simple string under the "response" key. The markdown response is structured with metadata fields followed by the main content, allowing easy parsing and integration into applications.
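
As an illustration of that parsing, here is a hedged Python sketch that separates the front matter metadata from the markdown body; the splitting logic is an assumption based on the sample response shown above:

def split_front_matter(markdown: str):
    """Split a '---'-delimited front matter block from the markdown body."""
    if markdown.startswith("---\n"):
        _, front, body = markdown.split("---\n", 2)
        meta = {}
        for line in front.strip().splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
        return meta, body.lstrip()
    return {}, markdown

# Example with a Web a Markdown response:
# meta, body = split_front_matter(api_json["response"])
# meta.get("title"), meta.get("date"), meta.get("categories")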

The clean text endpoint provides relevant textual content from a web page, while the markdown endpoint offers both the content and associated metadata, such as title, URL, and categories, facilitating better content management.

Users can customize requests by specifying different URLs for the endpoints. The API processes the content of the provided URL, allowing users to extract or convert various web pages as needed.

Typical use cases include content extraction for blogs, documentation, and content management systems. Developers can automate the process of gathering and formatting web content for easier integration and display.

The Markdown Maker API relies on the structure of the web pages it processes. While it aims to extract relevant content accurately, the quality of the output depends on the source page's structure and content quality.

If the API returns partial or empty results, users should verify the URL provided for accessibility and content availability. Implementing error handling in applications can help manage such scenarios effectively.
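
A minimal Python sketch of such error handling, where the retry count and the empty-content check are assumptions rather than documented behavior:

import requests

API_URL = "https://zylalabs.com/api/5661/creador+de+markdown+api/7371/contenido+de+markdown+extracto."

def extract_text(url: str, api_key: str, retries: int = 2) -> str:
    headers = {"Authorization": f"Bearer {api_key}"}
    for _ in range(retries + 1):
        response = requests.post(API_URL, json={"url": url}, headers=headers)
        if response.status_code == 200:
            text = response.json().get("response", "")
            if text.strip():
                return text
        # Failed or empty result: try again, then give up with an explicit error.
    raise ValueError(f"No content could be extracted from {url}")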

General FAQs

Zyla API Hub is like a big store for APIs, where you can find thousands of them in one place. We also offer dedicated support and real-time monitoring of all APIs. Once you sign up, you can choose which APIs you want to use. Just remember that each API needs its own subscription. But if you subscribe to several, you will use the same key for all of them, which makes everything easier for you.

Prices are shown in USD (US dollar), EUR (euro), CAD (Canadian dollar), AUD (Australian dollar), and GBP (British pound). We accept all major debit and credit cards. Our payment system uses the latest security technology and is backed by Stripe, one of the world's most trusted payment companies. If you have any issues paying by card, contact us at [email protected]


Additionally, if you already have an active subscription in any of these currencies (USD, EUR, CAD, AUD, GBP), that currency will be kept for subsequent subscriptions. You can change the currency at any time as long as you have no active subscriptions.

The local currency shown on the pricing page is based on the country of your IP address and is provided for reference only. The actual prices are in USD (US dollar). When you make a payment, the charge will appear on your statement in USD, even if you see the equivalent amount in your local currency on our website. This means you cannot pay directly in your local currency.

Occasionally, a bank may decline the charge due to its fraud protection settings. We suggest contacting your bank first to check whether they are blocking our charges. You can also access the Billing Portal and change the card used for payment. If this does not work and you need further assistance, please contact our team at [email protected]

Prices are determined by a recurring monthly or yearly subscription, depending on the plan you choose.

API calls are deducted from your plan based on successful requests. Each plan includes a specific number of calls you can make per month. Only successful calls, indicated by a response with status 200, count toward your total. This ensures that failed or incomplete requests do not affect your monthly quota.

Zyla API Hub works on a recurring monthly subscription system. Your billing cycle starts on the day you purchase one of the paid plans and renews on the same day the following month. So remember to cancel your subscription beforehand if you want to avoid future charges.

To upgrade your current subscription plan, simply go to the API's pricing page and select the plan you want to upgrade to. The upgrade is instant, letting you enjoy the new plan's features immediately. Note that remaining calls from your previous plan do not carry over to the new plan, so keep this in mind when upgrading. You will be charged the full amount of the new plan.

To check how many API calls you have left for the current month, look at the 'X-Zyla-API-Calls-Monthly-Remaining' field in the response header. For example, if your plan allows 1,000 requests per month and you have used 100, this field will show 900 remaining calls.

To see the maximum number of API requests your plan allows, check the 'X-Zyla-RateLimit-Limit' response header. For example, if your plan includes 1,000 requests per month, this header will show 1,000.

The 'X-Zyla-RateLimit-Reset' header shows the number of seconds until your limit resets, telling you when your request count will restart. For example, if it shows 3,600, there are 3,600 seconds left until the limit resets.
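
For example, these headers can be read from any response. The following Python sketch assumes the requests library and reuses the Web a Markdown endpoint shown earlier:

import requests

ENDPOINT = "https://zylalabs.com/api/5661/creador+de+markdown+api/7372/web+a+markdown"
headers = {"Authorization": "Bearer YOUR_API_KEY"}
payload = {"url": "https://techtalkverse.com/post/software-development/spark-basics/"}

response = requests.post(ENDPOINT, json=payload, headers=headers)

# Quota information is exposed through the response headers described above.
remaining = response.headers.get("X-Zyla-API-Calls-Monthly-Remaining")
limit = response.headers.get("X-Zyla-RateLimit-Limit")
reset_seconds = response.headers.get("X-Zyla-RateLimit-Reset")
print(f"{remaining}/{limit} calls remaining; limit resets in {reset_seconds} seconds")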

Yes, you can cancel your plan at any time from your account by selecting the cancel option on the Billing page. Keep in mind that upgrades, downgrades, and cancellations take effect immediately. Also, once you cancel you will no longer have access to the service, even if you had calls remaining in your quota.

You can reach us through our chat channel for immediate assistance. We are always online from 8 a.m. to 5 p.m. (EST). If you contact us outside of those hours, we will get back to you as soon as possible. You can also email us at [email protected]

To give you the opportunity to try our APIs without commitment, we offer a 7-day free trial that lets you make up to 50 API calls at no cost. The trial can only be used once, so we recommend applying it to the API you are most interested in. While most of our APIs offer a free trial, some may not. The trial ends after 7 days or once you make 50 requests, whichever comes first. If you reach the 50-request limit during the trial, you will need to "Start Your Paid Plan" to keep making requests. You can find the "Start Your Paid Plan" button in your profile under Subscription -> Choose the API you are subscribed to -> Pricing tab. Alternatively, if you do not cancel your subscription before day 7, your free trial ends and your plan is charged automatically, giving you access to all the API calls specified in your plan. Keep this in mind to avoid unwanted charges.

After 7 days, you will be charged the full amount of the plan you were subscribed to during the trial. It is therefore important to cancel before the trial period ends. Refund requests for forgetting to cancel on time are not accepted.

When you subscribe to an API free trial, you can make up to 50 calls. If you want to make more calls after this limit, the API will ask you to "Start Your Paid Plan". You can find the "Start Your Paid Plan" button in your profile under Subscription -> Choose the API you are subscribed to -> Pricing tab.

Payout Orders are processed between the 20th and the 30th of each month. If you submit your request before the 20th, your payment will be processed within that period.

