Who’s speaking about our Text Analytics APIs

Here is how the community speaks about and cites Dandelion API, our Text Analytics API:

  • Provide context for our users using Dandelion API

    Dandelion API provides context to our users

    The Evolution of Linked Data (a one-hour webinar)
    OCLC – Online Computer Library Center – slide 38

  • Dandelion API for your Knowledge parsing needs

    What are some of the use cases?
    Before you can analyze Big Data, in many cases you need to scrub raw text.
    Dandelion API has obvious applications in marketing for analyzing user feedback and sentiment, in data classification of raw content in, say, legal or business contexts, and finally in an area that I’ve been writing about, Google-like searches of corporate file systems.

    Dandelion APIs For Your Knowledge Parsing Needs
    The Technoverse Blog

  • Entity linking and Knowledge Extraction using Dandelion API

    Entity linking

    DBpedia Spotlight, OpenCalais, Dandelion API (was dataTXT), the Zemanta API, Extractiv and PoolParty Extractor analyze free text via Named Entity Recognition, then disambiguate candidates via Name Resolution and link the found entities to the DBpedia knowledge repository[3] (Dandelion API – Entity Extraction demo, DBpedia Spotlight web demo or PoolParty Extractor Demo).

    President Obama called Wednesday on Congress to extend a tax break for students included in last year’s economic stimulus package, arguing that the policy provides more generous assistance.

    As President Obama is linked to a DBpedia LinkedData resource, further information can be retrieved automatically, and a Semantic Reasoner can, for example, infer that the mentioned entity is of the type Person (using FOAF (software)) and of type Presidents of the United States (using YAGO). Counterexamples: methods that only recognize entities, or that link to Wikipedia articles and other targets that do not allow further retrieval of structured data and formal knowledge.

    Entity linking
    Article – Knowledge extraction

  • Generating Spotify playlists based on your tweets, built with Dandelion API

    Melinda demonstrated her Tweet Tracks project, which she built at Hackference 2014. It uses the Entity Extraction API (was dataTXT API) to extract places, people, events, etc. from Tweets and uses those in a Spotify search to identify a track relating to the tweet. We had fun trying to guess why it had picked some of the tracks.

    “Show and Tell 6” blog
    James Mead – Go Free Range Blog
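A minimal sketch of the kind of pipeline Tweet Tracks describes: send the tweet text to Dandelion's documented entity-extraction endpoint (`https://api.dandelion.eu/datatxt/nex/v1/`), then turn the extracted entity labels into a Spotify search string. The token value is a hypothetical placeholder, and the request is only built here, not actually sent:

```python
from urllib.parse import urlencode

NEX_ENDPOINT = "https://api.dandelion.eu/datatxt/nex/v1/"

def build_nex_request(tweet_text, token):
    """Build the query URL for a Dandelion entity-extraction call.

    The endpoint and the `text`/`token` parameter names follow
    Dandelion's public docs; `token` is a placeholder API key.
    """
    params = {"text": tweet_text, "token": token}
    return NEX_ENDPOINT + "?" + urlencode(params)

def spotify_query(entity_labels):
    # Tweet Tracks feeds the extracted entities into a Spotify search;
    # here we simply join the labels into one search string.
    return " ".join(entity_labels)

url = build_nex_request("Listening to Daft Punk in Paris", "YOUR_TOKEN")
query = spotify_query(["Daft Punk", "Paris"])
```

Issuing the GET request against `url` would return a JSON document whose `annotations` list carries the entities found in the tweet.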

  • Twitter bot integrated with Dandelion API

    @replies4u is an attempt to create a Twitter bot with some degree of (artificial) intelligence.
    The bot is possible due to the explosion in access to public databases through APIs (Application Programming Interfaces). The bot will attempt to respond to any unique mentions that end with a question mark. In the absence of any questions, the bot will periodically tweet interesting quotes.


  • Text Mining and Content Analysis - online course

    A guide to getting started with text and content analysis using quantitative and qualitative tools, published on udemy.com.

    Dandelion API is one of the tools used to perform automatic text categorisation.

    Text Mining and Content Analysis on udemy.com

  • Newspapers API Analysis - Spotting the Pirates

    The graph shows the 10 most discussed trends per year, from 2002 to 2014. These trends refer to New York Times and Guardian articles, and are computed from the number of articles in which the query “File Sharing” occurs.
    A dataset was uploaded into OpenRefine and the entities were extracted with Dandelion API (was dataTXT) (confidence filter: 0.6).

    The 10 most discussed trends per year, from 2002 to 2014
    Newspaper’s API analysis
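The “confidence filter: 0.6” mentioned above corresponds to Dandelion's documented `min_confidence` parameter: annotations scoring below the threshold are dropped. A hedged sketch of applying the same threshold to a NEX-style response (the response data below is invented for illustration):

```python
# Each annotation in a NEX response carries a `confidence` score in [0, 1];
# the newspaper analysis above kept only annotations at 0.6 or above.
mock_response = {
    "annotations": [
        {"spot": "File Sharing", "confidence": 0.82, "title": "File sharing"},
        {"spot": "Times", "confidence": 0.41, "title": "The New York Times"},
    ]
}

def filter_annotations(response, min_confidence=0.6):
    """Keep only annotations at or above the confidence threshold."""
    return [a for a in response["annotations"]
            if a["confidence"] >= min_confidence]

kept = filter_annotations(mock_response)
```

With the 0.6 threshold, only the high-confidence “File sharing” annotation survives, which is exactly what keeps noisy matches out of the trend counts.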

  • Data Visualisation - built with Dandelion API

    Dandelion API (ex dataTXT) offers named-entity recognition and links each entity found to a Wikipedia entry. Dandelion API's (was dataTXT) main selling point is that its algorithm seems to work very well on very short texts, making it almost indispensable for social-media analysis (notably tweets, which are limited to 140 characters).
    According to its website (https://dandelion.eu/).
    Dandelion API (was dataTXT) takes a mathematical approach that is not based on natural language processing, which lets the service work regardless of the language being processed. Nevertheless, the API documentation (https://dandelion.eu/docs/api/datatxt/nex/v1/) specifies that only Italian, French and English are supported for the moment.
    Like AlchemyAPI, Dandelion API (was dataTXT) offers a complete service, limited to 1,000 extractions per day for non-commercial use, with a paid tier allowing more extractions.
    Research licences are also available on request.

    Extraction of named entities, an opportunity for the cultural sector?

    NER Extension (in French)

  • Building an Entity Cloud from SERP to monitor the competitors

    Entity Cloud is a name that came to me the other day while I was thinking about how to automate entity extraction from a set of web resources.
    I wanted to integrate it with topic finder, the software I am developing, which is useful for competitor analysis.
    Isn't everybody saying that Google will get better and better at extracting entities from texts, and thus at understanding more precisely what is being talked about?
    So I asked myself: “how can I automatically extract the entities from the first 20 websites in the SERP?”
    Thanks to the APIs made available by a project I love, called Dandelion API, I developed a small script.
    The script analyses the SERP and examines only the HTML resources, excluding videos and images, for obvious reasons.

    Entity Cloud: perché può servirti? (in Italian)
    Luigi Luongo’s blog

  • Entity Search on documents and archives, using Dandelion API

    Second, the portal www.fontitaliarepubblicana.it stands out for its simple presentation and behaviour and, above all, for its adoption of a semantic search engine (dataTXT by Dandelion API, now the Entity Extraction API) that enables searching for entities, i.e. the people, places or concepts identified within the documents presented (through the docTrace software by Hyperborea).
    For each entity, thanks to a semantic analysis, any correlations or synonyms used in the texts to denote the same entity are also highlighted.
    In short, a valuable introduction of an intelligent system of access keys for archival descriptions, a step towards overcoming the well-known search and access problems faced by non-experts.

    Rete e portale degli “Archivi per non dimenticare” (in Italian)
