Jekyll 2022-06-29T15:59:23+02:00 <![CDATA[ArcheoFOSS 2012: one week left to submit your abstracts]]> 2012-03-20T00:00:00+01:00 2012-03-20T00:00:00+01:00 Stefano Costa <p>There is one week left to submit your abstracts for the 2012 ArcheoFOSS workshop. The deadline is Monday 26th March 2012. The call for papers and posters (in Italian) is available on the main website <a href=""></a> together with details on the submission procedure.</p> <p>This year we’re aiming for wider participation. On the one hand, this means that we expect more people to help us spread the word about the call for papers, and to submit proposals of their own. On the other hand, we will need to select the most interesting abstracts, to keep the quality of this event higher than ever.</p> <p>If you are doing archaeological research in Italy and you use free and open source software, or you’re interested in discussing it, this is the one meeting you should go to.</p> <p><a href="">ArcheoFOSS 2012: one week left to submit your abstracts</a> was originally published by Stefano Costa at <a href=""></a> on March 20, 2012.</p> <![CDATA[ArcheoFOSS 2012: ancora una settimana per inviare le vostre proposte]]> 2012-03-20T00:00:00+01:00 2012-03-20T00:00:00+01:00 Stefano Costa <p>There is still one week left to submit papers and posters to the ArcheoFOSS 2012 workshop. The deadline is Monday 26th March 2012. 
The call for papers is available on the website <a href=""></a> together with information about the event and on submitting your proposals.</p> <p>This year we are aiming for wide participation: on the one hand, this means that we expect everyone interested to help spread the call and to submit proposals; on the other hand, we will necessarily have to select the most interesting proposals in order to maintain a satisfactory level of quality.</p> <p><a href="">ArcheoFOSS 2012: ancora una settimana per inviare le vostre proposte</a> was originally published by Stefano Costa at <a href=""></a> on March 20, 2012.</p> <![CDATA[oculu-Z, the open source, open data platform for collaborative Computer Vision technology]]> 2011-11-16T00:00:00+01:00 2011-11-16T00:00:00+01:00 Stefano Costa <p><a href="">Oculu-Z</a> is a new effort to gather a community of developers and users (is there still a difference between the two?) around open source techniques of 3D reconstruction from digital images. Community means YOU and me, and the Oculu-Z team is led by Benjamin Ducke, one of the most prominent experts in archaeological computing.</p> <p>This looks like a very good opportunity for experimenting with your own images (either dig or finds, it doesn’t matter) and developing shared best practices.</p> <p><a href="">oculu-Z, the open source, open data platform for collaborative Computer Vision technology</a> was originally published by Stefano Costa at <a href=""></a> on November 16, 2011.</p> <![CDATA[10 archeologi italiani da non perdere ]]> 2011-11-16T00:00:00+01:00 2011-11-16T00:00:00+01:00 Stefano Costa <p>The blog <a href="">Archeologia 2.0</a> has drawn up a <a href="">list</a> of the 10 Italian archaeologists most attentive to digital innovation, new technologies and developments in social media.</p> <p> is among these ten, and the <a href="">quantitative archaeology wiki</a> also gets a mention. 
But that’s not all, because among these ten there is also Giuliano De Felice, who together with me and many other people has carried ArcheoFOSS forward over the last few years. I take this as a positive judgement on ArcheoFOSS as a whole and on the community that revolves around it, with well-established projects (including IOSA) and growing ones (gNewArchaeology, for example). Two more of the “nominees” have been part of ArcheoFOSS, namely Helga Di Giuseppe (Padua 2008) and Gabriele Gattiglia (Rome 2009).</p> <p>But there is also food for thought here, a prompt to ask ourselves questions and to search together for the words to build the answers that best fit what each of us does. Innovation is not just technological growth and shiny progress: skills need to be built starting from basic training, and there is a new public, non-intellectual dimension of archaeology still entirely to be imagined (unless we want to become archaeological finds ourselves). Among other things, we need to start thinking about what we give back in exchange for the free software we use.</p> <p>What do you think? I believe ArcheoFOSS is a forge of innovation and experimentation, and this list is a small thing, but it seems to point in that direction. So let’s keep going.</p> <p><a href="">10 archeologi italiani da non perdere </a> was originally published by Stefano Costa at <a href=""></a> on November 16, 2011.</p> <![CDATA[SVG Pottery: the documentation is now available]]> 2011-10-19T00:00:00+02:00 2011-10-19T00:00:00+02:00 Stefano Costa <p>Thanks to <a href=""></a>, the SVG Pottery documentation project is now available directly at <a href=""></a>, always updated to the latest version.</p> <p>We are looking for examples of SVG usage for pottery and other archaeological finds in digital publications. Have you got your own drawings that you would like to publish? 
Let us know by commenting here!</p> <p><a href="">SVG Pottery: the documentation is now available</a> was originally published by Stefano Costa at <a href=""></a> on October 19, 2011.</p> <![CDATA[SVG Pottery: pottery drawings on the Web]]> 2011-05-30T00:00:00+02:00 2011-05-30T00:00:00+02:00 Stefano Costa <p>A few months ago I posted here a <em><a href="">vision</a></em> of putting pottery drawings on the Semantic Web. Now I’m trying to follow up on that vision, going into the details and problems that it brings to light.</p> <p>I have created a <a href="">new repository at bitbucket</a> to give this idea more substance than a series of blog posts. For now there is nothing but a small set of pages written in reStructuredText for use with <a href="">Sphinx</a>. The plan is to add SVG samples, possibly some XSLT snippets, and I know that I’m going to write some Python code.</p> <p>There are some strong assumptions here:</p> <ul> <li>you want to publish your drawings on the Web</li> <li>you want to do it in a standards-compliant, sustainable way</li> <li>you are ready to <em>change</em> your current publishing workflow</li> <li>you like to experiment</li> </ul> <p>These four assumptions explain why SVG is the holy grail we should aim for. 
There is almost no prior art, so I’m adopting an incremental approach, from simple to complex:</p> <ul> <li>use SVG for new work or convert your legacy drawings to SVG</li> <li>add metadata to SVG (possibly by editing source code directly)</li> <li>deal with units and coordinate systems</li> <li>experiment with extracting data from SVG for further processing</li> </ul> <p>Should you want to join me in this experiment, you are very welcome!</p> <p><a href="">SVG Pottery: pottery drawings on the Web</a> was originally published by Stefano Costa at <a href=""></a> on May 30, 2011.</p> <![CDATA[Pottery drawings and the semantic web]]> 2011-01-10T00:00:00+01:00 2011-01-10T00:00:00+01:00 Stefano Costa <p>It’s almost impossible to know how many <a href="">potsherds</a> have been drawn by archaeologists to date. Their number is, however, no doubt well over one million (educated guesses welcome). But when it comes to the standard question “How many of these drawings are on the web?”, there is going to be some disappointment. Especially if you don’t count <a href=";id=9UAOAQAAIAAJ">Google Books</a>, and in fact you should not – I’m going to explain why in a moment.</p> <p>In 2010, most drawings are still done by hand on a piece of paper, but it’s common to later digitize these drawings using vector graphics software like Autodesk AutoCAD™, Adobe Illustrator™ or <a href="">Inkscape</a>. There are some marked differences between choosing CAD and a “graphics” program like Inkscape or Illustrator (below shortened as “I”):</p> <ul> <li>“I” don’t manage measurement units, you can just refer to the size of the paper (e.g. 
A4)</li> <li>CAD is very good at producing high-quality printouts, with fine tuning of line-widths and similar settings</li> <li>CAD uses file formats that are understood almost only by CAD software (DXF, DWG)</li> <li>“I” are capable of (natively) managing open file formats like SVG</li> </ul> <p>There are many more differences, but these are the ones of interest here. I will now make a brief digression on why we make so many drawings and what their purposes are from different points of view.</p> <p>The first purpose of the drawing is to <em>see</em> the potsherd. This is true both because you look at it very carefully during the drawing process, and because the final result allows you and others to see it in a standardized layout, with some possibly hidden details clearly viewable on a clean surface. By no means am I saying that a drawing is the best representation of a potsherd, nor that it is the only one that should be considered for the purpose of dissemination and publication.</p> <p>The second reason for drawing is being able to quickly go through an assemblage and develop typologies. Drawings make all the work easier and can be moved freely, while archaeological finds often cannot be moved from their storage place, for a number of reasons including lack of physical space, restrictions on movement imposed by conservation officers or even cultural heritage laws. Your drawings are a valuable digital copy of the assemblage, most probably together with a database. A digital copy of your drawings is one step further in the de-materialization of the archaeological assemblage (I’m going to write more about this issue soon).</p> <p>The third and most prominent use of drawings is for publishing and dissemination of archaeological contexts, which follow <em>de facto</em> standards in each archaeological sub-domain, imposed either by custom or by editors. Some publications and excavation reports feature hundreds of drawings. 
Yet it seems like drawings are just passive illustrations, with no chance of being indexed, processed and disseminated in a proper format. I want something better than a raster image for my drawings, not just on my laptop but on the Web, too. Having your excavation publication in Google Books, or even as a downloadable PDF, is not what I’m envisioning here. Just like <a href="">Tim Berners-Lee</a>, I need <em>raw data</em> that I can build upon and play with.</p> <p>Given these three assumptions, it is an obvious observation that I need an <em>open format</em> for my data, and this translates to <a href="">SVG</a>. I’m not sure it’s the best format, but it’s certainly a decent one, with fairly good support both on the desktop and on the web, and not entirely obscure to the masses. Furthermore SVG has been the subject of <a href="">several</a> <a href="">digital</a> <a href="">experiments</a> in archaeological publishing and <a href="">dissemination</a>. I realize that most efforts have been in the field of mapping and GIS ‒ and I assume that the reason for this disparity is the fact that archaeologists doing GIS are, generally speaking, more tech-inclined than those doing ceramics. This is a recurring problem, one that underlies and feeds the hyper-specialization we see today.</p> <p>The only actual example of such an approach is <a href="">Greek, Roman and Byzantine Pottery at Ilion</a> (GRBP) by <a href="">Sebastian Heath</a> and Billur Tekkök. If you download the <code class="language-plaintext highlighter-rouge">.tar.gz</code> archive containing the entire website, you will find lots of SVG “source” files in the <code class="language-plaintext highlighter-rouge">grbpottery/svg</code> directory, along with their JPEG rasterizations. The SVG files are not used in the public website, but are behind the scenes and were produced using Adobe Illustrator. 
With increasing support for SVG in major web browsers (including IE 9), it will be easy for GRBP to switch from JPEG to SVG if it turns out to be convenient.</p> <p>Scale and units are provided in the GRBP drawings by a scale bar within the drawing itself, overcoming one (possibly serious) limitation of the SVG format: there is no straightforward way to use real measurement units. A second limitation is the orientation of the drawing: like most digital imaging, SVG has the origin in the top-left corner, whereas a bottom-left origin would be much more familiar to anyone who knows the Cartesian system. This topic is worth a separate discussion, but in the meantime you can take a look at <a href="">Kotyle</a>, a software tool to compute the capacity of ceramic vessels.</p> <p>Some considerations apply only if you have in mind a traditional publication work-flow, with a book at the end of the process. Instead, let’s assume we want to try out a new publishing process, geared towards the web. Since 2008, <a href="">SVG can include RDFa attributes</a>. Period.</p> <p>RDFa attributes mean annotations <em>inside</em> the drawing that are <em>machine-readable</em> ‒ I’m thinking about “rim” and “handle” getting their own attributes, just like an HTML <code class="language-plaintext highlighter-rouge">p</code> or <code class="language-plaintext highlighter-rouge">span</code>. I’m going to include the author name in the metadata, together with semantic links to the original context and the comparisons for the shape and decoration. This is something that needs to be done by hand, but could be done within a dedicated editor. 
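</p> <p>A minimal sketch of what such an annotated drawing could look like ‒ the <code class="language-plaintext highlighter-rouge">pot:</code> vocabulary URL and property names here are purely hypothetical, just to show where the RDFa attributes would sit:</p>

```xml
<!-- A pottery profile with hypothetical RDFa annotations.
     The pot: prefix and its property names are invented for illustration;
     authorship is expressed with standard Dublin Core. -->
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:dc="http://purl.org/dc/elements/1.1/"
     prefix="pot: http://example.org/pottery#"
     width="200" height="300">
  <metadata>
    <dc:creator>Stefano Costa</dc:creator>
  </metadata>
  <!-- each diagnostic part of the vessel gets its own annotated path -->
  <path property="pot:rim"    d="M 10,20 L 60,20" />
  <path property="pot:handle" d="M 60,40 C 80,60 80,90 60,110" />
</svg>
```

<p>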
Maybe directly <a href="">on the web</a>.</p> <p>My idea for a sustainable digital work-flow for pottery drawings would thus be along these lines:</p> <ul> <li>draw on paper, following traditional procedures</li> <li>digitize the paper drawing in Inkscape, using the GRBP model</li> <li>add semantic annotations by hand with a text editor, including authorship and links to context and comparisons/typologies</li> <li>publish a collection of SVG drawings, alongside their raster versions for users with legacy systems</li> </ul> <p>This is all very simplistic, and that’s exactly why I’m publishing it here. All comments are welcome, and will feed into the collaborative writing of a draft shared protocol.</p> <p><a href="">Pottery drawings and the semantic web</a> was originally published by Stefano Costa at <a href=""></a> on January 10, 2011.</p> <![CDATA[L'Archeologia va in Open Source]]> 2011-01-07T00:00:00+01:00 2011-01-07T00:00:00+01:00 Stefano Costa <p>Today an article I wrote was published in Nòva24, a weekly supplement to “Il Sole 24 ore”, a major Italian newspaper. You can read it online at <a href=""></a>. Despite its title, the article deals with the challenges that society is posing to the cultural heritage sector, and why technology matters to the debate. I advocate openness towards the public as a means of giving value to archaeology. This is obviously the approach of the IOSA project, which has now entered its seventh year and encompasses a wide range of topics, from free software to open formats and standards for digital storage, and open data. 
Thanks to Raimondo Iemma, who kindly asked me to write this article.</p> <p><a href="">L'Archeologia va in Open Source</a> was originally published by Stefano Costa at <a href=""></a> on January 07, 2011.</p> <![CDATA[Archaeology as text and archaeology as image]]> 2010-11-24T00:00:00+01:00 2010-11-24T00:00:00+01:00 Stefano Costa <p>Last April I started to collect some sketch notes about archaeology as text. It’s not about reading archaeological remains as a text (which could be considered quite a standardized post-processual approach) but rather about how we gain new archaeological knowledge. I quickly came to affirm that:</p> <blockquote> <p><em>§ A large part of archaeological knowledge is transmitted by means of text, including books, journals, excavation reports and diaries, database alphanumeric records, and others.</em></p> </blockquote> <p>So, rather than being a <em>primary source</em> (speaking in terms of historical archaeology), archaeology itself becomes a <em>secondary source</em>, one that needs to be managed and approached from another point of view. Reading long descriptions of soil layers on top of collapsed walls is not the same as digging them yourself. Going through massive tables of quantified data about archaeological pottery is quite another thing from working for months on a ceramic assemblage to produce those tables. One might argue that for experienced archaeologists there is no difference between the two, as they know exactly what the author of a certain text wants to say ‒ I accept that in such cases textual communication takes place without any errors, but I question the identity between written archaeology and material archaeology. 
Rather than falling back into the 12th century and the problem of universals, I’m interested in a reflexive approach to the creation of archaeological knowledge.</p> <p>Text dominates the transmission of archaeological knowledge (and sometimes I’ve heard words of blame towards glossy books with lots of images and little text to provide a structured discourse). Especially in the sub-domain of excavation reports, there is a distinct, formalized ‒ artificial, we might say ‒ language, targeted at bureaucracy rather than at conveying meaning. I like to read those reports and I think they are the main source for what I know about Late Antique Italy, for example. I’m always surprised at their diversity, and still I can find striking similarities among most of them, first and foremost in how the description of excavated contexts and features is kept logically and physically separate from the presentation of finds (like ceramics and coins).</p> <p>It wasn’t always like this. According to <a href="">Gavin Lucas</a>, there has been a clear change in how archaeological publications (particularly excavation reports) mix text and images. There’s a quote attributed to <a href="">Augustus Pitt Rivers</a> that captures the distance:</p> <blockquote> <p>Don’t illustrate your descriptions. Describe your illustrations.</p> </blockquote> <p>(G. Lucas, <em><a href="">Critical Approaches to Fieldwork</a></em>, p. 211)</p> <p>In my research group at the University of Siena, we have been using a similar approach for two years now: we use semi-aerial photographs of our excavation areas as drawing boards for taking notes and sketching interpretive plans of building rooms. We call them “annotated maps”, in a consciously critical view of how GIS is currently used for on-site archaeological data recording, and try to find a mix of text, objective representation of materialities and (multiple) interpretations. 
It’s not always perfect, and there are some things that can go wrong.</p> <p>This debate also involves how archaeological photography is used. As with any (disruptive) technology, my view is that it’s far from being an objective recording technique, unless we deliberately adopt <em>very detailed</em> instructions on how to take photographs: this is more or less what has happened with images of trenches and contexts. Personally, I can’t see any advantage in applying such mechanical procedures, apart from a reassuring homogenization. It’s not by chance that both text and images have undergone the same process of formalization. The prevalence of text over images is likely explained by its abstract nature, when compared to the (apparent) fidelity of photography to the materiality of archaeology.</p> <p><em>This is the first of two posts dealing with archaeology, text and media. The next post will appear next week and is entitled “Archaeology beyond text and media”</em> <!--break--></p> <p><a href="">Archaeology as text and archaeology as image</a> was originally published by Stefano Costa at <a href=""></a> on November 24, 2010.</p> <![CDATA[Publishing digital collections. An experiment with “Ancient Marbles”]]> 2010-11-18T00:00:00+01:00 2010-11-18T00:00:00+01:00 Stefano Costa <p>Everyone takes thousands of digital photographs each year. For an archaeologist, it’s common practice to collect pictures of museum artifacts, archaeological sites and landscapes.</p> <p>I want to push the concept of “digital collection” beyond the traditional habit of creating a database. 
Federico Marri and I have started to build <em>something</em> about ancient marbles, that is, all the semi-precious building and decorative stones used in Greek and Roman classical antiquity and afterwards.</p> <div style="text-align:center"><a href="" title="Greenstone from Thessaly by archeosteko, on Flickr"><img src="" width="180" height="240" alt="Greenstone from Thessaly" /></a></div> <p>We are building <em>what</em>? Something. Let me explain in more detail.</p> <p>For example, take the <a href="">Ancient Marbles Wiki</a> (hosted at Wikia). As a self-appointed <span lang="el">Υπουργος της βικιποιεσις</span> I have a strong motivation to use a wiki as a place to create shared knowledge. It already has some basic content, and we are pulling in descriptions from public domain scholarly reference texts like those by Pullen and Corsi (both available for free on <a href=""></a> or Google Books). We have a loosely defined set of information we want to collect about all kinds of marble, but once that is done, it’s entirely up to the wiki contributors whether a simple “record sheet” will become a truly encyclopedic page or not. This is not a database, but when the content is more exhaustive it will be usable as a source for Wikipedia itself, or could be directly linked to as a reference for ancient marbles. We believe in linking, but this wiki is for now a single, simple platform. It’s not inherently part of the web.</p> <p>Take #2. Introduce <a href=""></a> and <a href="">flickr</a>. The flow of new material about ancient marbles is not constant, and varies over time. Starting yet another blog is just the simplest thing in the world, and it looks like there isn’t yet a <a href="">blog about ancient marbles</a>. Obviously, gives lots of space and flexibility, but more than that it enables us to switch to a self-hosted webserver in 15 minutes. 
Other publishing platforms are equally well-equipped (think tumblr or posterous) but don’t have a .tar.gz on their home page. So much for freedom. Every time we find some nice column, base, or slab, we are going to post a new short article about it, with a picture. And here comes the “trick”. The image is not hosted on, but on flickr. Flickr gives you unlimited space, 100 MB worth of uploads each month, automated CC licensing of your content, and some other interesting features, like <a href="">OpenStreetMap tagging</a>. Again, there are dozens of photo uploading websites, but flickr is the only one that offers this capability. OSM tagging means that there’s a link between the photograph and the OSM feature, like a church, a museum, an archaeological site, a monument. To find the OSM way ID of your selected feature, follow these simple steps:</p> <ol> <li>go to the map</li> <li>zoom in as much as possible on the area where the building is</li> <li>from the right toolbar toggle the “Data” layer on</li> <li>when the sidebar is loaded on the left, you will be able to select the building with your pointer</li> <li>you will see a link to its page in the left sidebar</li> <li>on that page, you will find the “way ID”</li> </ol> <p>In OSM jargon, each building is a “way” and flickr wants to know the way ID, which should then be added as a “machine tag” to the photograph, like <code class="language-plaintext highlighter-rouge">osm:way=32260844</code> for the <a href="">Βυζαντινό και Χριστιανικό Μουσείο</a> (Byzantine and Christian Museum) in Athens. From that point on, flickr knows that this photograph is about that <em>place</em>: it’s more than a pair of geographic coordinates. See for example all the pictures tagged with <a href=""><code class="language-plaintext highlighter-rouge">osm:way=10973689</code></a>, best known as the <a href="">Parthenon</a>. 
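</p> <p>A machine tag is just a namespaced <code class="language-plaintext highlighter-rouge">namespace:predicate=value</code> string, so it is trivial to handle in your own scripts. A minimal sketch in Python ‒ the function name is mine, not part of any flickr API:</p>

```python
def parse_machine_tag(tag):
    """Split a flickr machine tag like 'osm:way=32260844'
    into (namespace, predicate, value)."""
    ns_pred, value = tag.split("=", 1)
    namespace, predicate = ns_pred.split(":", 1)
    return namespace, predicate, value

# The Byzantine and Christian Museum example from above:
print(parse_machine_tag("osm:way=32260844"))
# ('osm', 'way', '32260844')
```

<p>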
You can get feeds for each tag and literally “follow” your favourite monument.</p> <p>You may argue that this is way more complicated and geeky than just using flickr’s built-in map, but what if that map were based on OSM as well? Yes, we all know, coverage is less than desirable, especially outside large urban centres, but since Greece is <a href="">opening up lots of its geodata</a> I’m confident there will be huge improvements in the next year. Also, don’t forget that you can add new content to OpenStreetMap. I just added the <a href="">Antiquarium</a> of Ostia Antica, where we took some fantastic photographs last month. This “feature” also has its drawbacks: anyone could delete a way and break the system. Alas, nothing is perfect. Comments welcome.</p> <p><a href="">Publishing digital collections. An experiment with “Ancient Marbles”</a> was originally published by Stefano Costa at <a href=""></a> on November 18, 2010.</p> <![CDATA[Archaeological geophysics with free open source software - tpq]]> 2010-07-05T00:00:00+02:00 2010-07-05T00:00:00+02:00 Stefano Costa <p>As many readers will know, since 2008 we’ve been working to give archaeologists and their colleagues the chance to do professional surveying on free operating systems. What sounded like a silly idea is now reality in the <a href="">Total Open Station</a> project.</p> <p>Total stations do a nice job, but what if you’re interested in what’s <em>beneath</em> the soil rather than on top of it? Enter geophysics, with another load of costly hardware tools that are locked in proprietary combinations of software and drivers.</p> <p>If you’re using GNU/Linux or another free operating system, you might find it quite difficult to interact with your geophysical device without any dedicated software. 
As you might expect, you’re not the first to encounter this problem (you’re <em>never</em> going to be the first) and someone else has already started working on a solution.</p> <p><a href="">Tractatus Post Quem</a> (tpq) by John Donovan is a</p> <blockquote> <p>collection of tools useful to the archaeological geophysicist and surveyor, focussing on the conversion of data from proprietary formats, georeferencing them, and tagging with metadata.</p> </blockquote> <!--break--> <p>Wow, this seems very promising. You will find two different programs in the repository:</p> <ul> <li><code class="language-plaintext highlighter-rouge">rm15dump</code> downloads data from Geoscan Research RM15-D resistivity meters</li> <li><code class="language-plaintext highlighter-rouge">tpq</code> proper is a package of Python tools for processing archaeological geophysical data, integrated with <a href="">GDAL</a></li> </ul> <h3 id="rm15dump"><code class="language-plaintext highlighter-rouge">rm15dump</code></h3> <p><code class="language-plaintext highlighter-rouge">rm15dump</code> is a command-line UNIX program written in C++. It downloads data from <a href="">Geoscan Research</a> RM15-D resistivity meters, and outputs them in a variety of useful formats. According to the author, it has only been tested on non-multiplexed RM15-D units, but other hardware (e.g. 
RM4 with DL10 and RM15) should also work.</p> <p>It is quite easy to build from source, with the provided makefile.</p> <h3 id="tpq"><code class="language-plaintext highlighter-rouge">tpq</code></h3> <p><code class="language-plaintext highlighter-rouge">tpq</code> is instead a Python package with dedicated modules such as:</p> <ul> <li><code class="language-plaintext highlighter-rouge">destripe</code>: removes stripes from tiles of geophysical data, a.k.a. zero mean traverse</li> <li><code class="language-plaintext highlighter-rouge">from_xyz</code>: converts from a text xyz file to a TPQ image</li> <li><code class="language-plaintext highlighter-rouge">highpass</code>: a highpass filter</li> <li><code class="language-plaintext highlighter-rouge">to_geotiff</code>: converts an image to GeoTIFF</li> </ul> <p>All these tools implement standard processing techniques in a very low-level fashion. Your mileage may vary: if you’re used to point-and-click operation and you don’t actually know what you’re doing when processing geophysical data, these tools won’t work for you. On the other hand, if you’re willing to operate <em>and</em> understand the analytical process, if you like to hack your tools, or if you just like software freedom, this should be your way to go.</p> <p>I very much like the choice of Python as a programming language for archaeological purposes. In this case, <a href="">NumPy</a> again proves to be a powerful numerical programming environment, allowing for high performance and clear code.</p> <p>Integration with GDAL is very convenient, because it allows for a seamless workflow into GIS processing. 
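</p> <p>To give an idea of how simple these standard techniques are at their core, here is a minimal zero-mean-traverse (destriping) sketch in NumPy ‒ not tpq’s actual code, just the textbook operation it implements, on made-up readings:</p>

```python
import numpy as np

# Made-up 4x5 tile of resistivity readings: each traverse (row) carries
# a constant offset, the kind of striping zero mean traverse removes.
tile = np.array([[10., 11.,  9., 10., 10.],
                 [20., 21., 19., 20., 20.],
                 [10., 10., 10., 11.,  9.],
                 [30., 31., 29., 30., 30.]])

# Subtract each traverse's own mean from its readings, so that
# inter-traverse offsets disappear and only local anomalies remain.
destriped = tile - tile.mean(axis=1, keepdims=True)

print(destriped.mean(axis=1))  # every traverse now averages zero
```

<p>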
I’ll be doing some tests in the coming months with my department colleagues; if you do the same, please publish your results and let the author know.</p> <p><a href="">Archaeological geophysics with free open source software - tpq</a> was originally published by Stefano Costa at <a href=""></a> on July 05, 2010.</p> <![CDATA[COMMUNIA Conference "University and CyberSpace": I'm going]]> 2010-06-17T00:00:00+02:00 2010-06-17T00:00:00+02:00 Stefano Costa <p>There is a great conference in Turin in 11 days: <a href="">COMMUNIA Conference “University and CyberSpace”</a>. The list of speakers is just amazing; if you would like to talk about open archaeology and cultural heritage in general, please come and find me.</p> <p><a href="">COMMUNIA Conference "University and CyberSpace": I'm going</a> was originally published by Stefano Costa at <a href=""></a> on June 17, 2010.</p> <![CDATA[A look at pollen data in the Old World]]> 2010-06-15T00:00:00+02:00 2010-06-15T00:00:00+02:00 Stefano Costa <p>Since the 19th century, the study of archaeobotanical remains has been very important for combining “strict” archaeological knowledge with environmental data. Pollen data make it possible to assess the introduction of certain domesticated plant species, or the presence of other species that typically grow where humans dwell. Not all pollen data come from archaeological fieldwork, but the relationship between the two sets is strong enough to take an interested look at pollen data worldwide, their availability and most importantly their <strong>openness</strong>, for which we follow the <a href="">Open Knowledge Definition</a>.</p> <p>The starting point for finding pollen data is the <a href="">NOAA website</a>.</p> <p>The <a href="">Global Pollen Database </a> hosted by the NOAA is a good starting point, but apparently its coverage is quite limited outside the US. 
Furthermore, data from 2005 onwards aren’t available via <a href="">FTP</a> in <a href="">simple documented formats</a>, but are instead downloadable <em>as Access databases</em> from <a href="">another external website</a>. Defining Access databases as a Bad Choice™ for data exchange is perhaps a euphemism.</p> <p>Unfortunately, a growing number of databases covering single continents or smaller regions is emerging, and their approaches to data dissemination show marked differences.</p> <h3 id="americas">Americas</h3> <p>For both North and South America, you can get data from more than one thousand sites directly via FTP. There are no explicit terms of use. Usually, data retrieved from federal agencies are public domain data.</p> <p>The <a href="">README document</a> only states <code class="language-plaintext highlighter-rouge">NOTE: PLEASE CITE ORIGINAL REFERENCES WHEN USING THIS DATA!!!!!</code>. Attribution is consistent with the requirements of the <a href="">Open Knowledge Definition</a>, and it’s also good scholarly practice.</p> <ul> <li><a href="">North American Pollen Database</a></li> <li><a href="">Latin American Pollen Database</a></li> <li><a href="">Global Pollen Database</a></li> </ul> <h3 id="europe">Europe</h3> <p>From the GPD website we can easily reach the <a href="">European Pollen Database</a>, which is however hosted on yet another website (and things can be even more confusing, given that the NOAA website has some dead links).</p> <p>You can download EPD data in PostgreSQL dump format (one file for each table, with a separate SQL script <code class="language-plaintext highlighter-rouge">create_epd_db.sql</code>). Data in the EPD can be restricted or unrestricted. That’s fine: let’s see how many unrestricted datasets there are. 
According to the database documentation, the <code class="language-plaintext highlighter-rouge">P_ENTITY</code> table contains the use status of each dataset:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>steko@gibreel:~/epd-postgres-distribution-20100531$ cat p_entity.dump | awk -F "\t" '{ print $5 }' | sort | uniq -c
    154 R
   1092 U
</code></pre></div></div> <p>This is pretty good, because almost 88% of the datasets (1092 out of 1246) are unrestricted (NB: I write most of my programs in Python but I <em>love</em> one-liners that involve <code class="language-plaintext highlighter-rouge">awk</code>, <code class="language-plaintext highlighter-rouge">sort</code> and <code class="language-plaintext highlighter-rouge">uniq</code>). We could easily create an “unrestricted” subset and make it available for easy download to all those who don’t want to mess with restricted data.</p> <p>But what does “unrestricted” mean for EPD data? Let’s take a more careful <a href="">look</a> (emphasis mine):</p> <blockquote> <ol> <li>Data will be classified as restricted or unrestricted. All data will be available in the EPD, although restricted data can be used only as provided below.</li> <li>Unrestricted data are available <em>for all uses</em>, and are included in the EPD on various electronic sites.</li> <li>Restricted data may be used only by permission of the data originator. Appropriate and ethical use of restricted data is the responsibility of the data user.</li> <li>Restrictions on data will expire three years after they are submitted to the EPD. Just prior to the time of expiration, the data originator will be contacted by the EPD database manager with a reminder of the pending change. The originator may extend restricted status for further periods of three years by so informing the EPD each time a three-year period expires.</li> </ol> </blockquote> <p>Sounds quite good, doesn’t it? “for all uses” is reassuring and the short time limit is a good trade-off.
The horror comes a few paragraphs below with the following scary details:</p> <blockquote> <ol> <li>The data are available only to non-profit-making organizations and for research.</li> </ol> <p>Profit-making organizations may use the data, even for legitimate uses, only with the written consent of the EPD Board, who will determine or negotiate the payment of any fee required.</p> </blockquote> <p>Here the false assumption that only academia is entitled to perform research is taken for granted. And there are even more <a href="">rules</a> about the “normal ethics”: basically, if you use EPD data in a publication, the original data author should be listed among the authors of the work. I always thought citation and attribution were invented for that exact purpose, but it looks like they have a distinctly different approach to <em>attribution</em>. The EPD even decides what the “legitimate” uses of pollen data are (I can hardly think of any possible illegitimate use).</p> <ul> <li><a href="">European Pollen Database</a></li> </ul> <h3 id="africa">Africa</h3> <p>You write “Africa” but you read “Europe” again, because most research projects are run by French and English universities. For this reason, the situation is almost the same. What is even worse is that in developing countries there are far fewer people or organizations that can afford to buy those data, notwithstanding the fact that in regions under rapid development the study and preservation of environmental resources are of major importance.</p> <p>Data are downloadable for individual sites using a <a href="">search engine</a>, in Tilia format (not ASCII, unfortunately). The problems come with the license.</p> <p>The wording is almost exactly the same as for the EPD seen above:</p> <blockquote> <p>Normal ethics pertaining to co-authorship of publications applies.
The contributor should be invited to be a co-author if a user makes significant use of a single contributor’s site, or if a single contributor’s data comprise a substantial portion of a larger data set analysed, or if a contributor makes a significant contribution to the analysis of the data or to the interpretation of the results. The data will be available only to non-profit-making organisations and for research. Profit-making organisations may use the data for legitimate purposes, only with the written consent of the majority of the members of the Advisory board, who will determine or negotiate the payment of any fee required. Such payment will be credited to the APD.</p> </blockquote> <ul> <li><a href="">African Pollen Database at NOAA</a></li> <li><a href="">Banque Africaine de Données Polliniques/African Pollen Database</a></li> </ul> <h3 id="conclusions">Conclusions</h3> <p>As with <a href="">dendrochronological data</a>, universities and research centers seriously misunderstand their role in society as places of research and innovation that are <em>available for everyone</em>. In other words, academia is a closed system producing data (at very high cost for society) that are only available inside its walls, even though it’s all done with public money.</p> <h3 id="appendix-using-pollen-data">Appendix: Using pollen data</h3> <p>Pollen data are usually presented in the form of synthetic charts where both stratigraphic data and quantitative pollen data are easily readable. Each “column” of the chart stands for a species or <em>genus</em>. You <em>can</em> create this kind of visualization with free software tools.</p> <p>The <a href="">stratigraph</a> package for <a href="">R</a> can do</p> <blockquote> <p>plotting and analyzing paleontological and geological data distributed through time in stratigraphic cores or sections.
Includes some miscellaneous functions for handling other kinds of palaeontological and paleoecological data.</p> </blockquote> <p><a href="">A look at pollen data in the Old World</a> was originally published by Stefano Costa at <a href=""></a> on June 15, 2010.</p> <![CDATA[The flow of data]]> 2010-06-14T00:00:00+02:00 2010-06-14T00:00:00+02:00 Stefano Costa <div class="posterous_autopost"><p>How is it that citizens have no access to entire datasets that are used <br />every day by thousands of people? Here is a draft of a possible <br />explanation, simplistic but not without sense.</p> <p><a href=""><img src="" width="500" height="469" /></a> </p> <p style="font-size: 10px;"> <a href="">Posted via email</a> from <a href="">steko</a> </p> </div> <p><a href="">The flow of data</a> was originally published by Stefano Costa at <a href=""></a> on June 14, 2010.</p> <![CDATA[Developing a vocal language. Standing three miles apart.]]> 2010-06-14T00:00:00+02:00 2010-06-14T00:00:00+02:00 Stefano Costa <div class="posterous_autopost"><p>Tonight I was walking along a country road near my house, almost in the dark. Despite the highway that runs less than 500 meters from there, there was an unusual moment of silence (probably everyone else in Italy was staring at the TV), and I suddenly realized that <em>with that silence</em> it would be possible for me to hear someone crying out loud from the Torre del Mangia &mdash; literally three miles away. Or vice versa, if you like.</p> <p>It&rsquo;s not that different from how the <em>muezzin</em> spreads his voice and prayers. In a pre-industrial society there is, generally speaking, much more silence than now. As a consequence, you can hear voices and sounds from far away.</p> <p>Now translate this concept to &hellip; 40,000 BP and imagine how you would use your voice to communicate with someone else.
The usual theory about the development of human language deals with social practices like sitting around the fire, etc., that happen while being in the same place. That is fine, but to me it doesn&rsquo;t explain everything: the same people had to communicate during the day as well, and if they were developing a language that would fit their needs, we may suppose they used it during hunting and gathering as well. My idea is that in this way the resulting language is restricted by the use they made of it: if it was for communicating from three miles away, it had to be made of distinct and recognizable sounds. Thus, in a sense, a simpler language than what can be used when sitting around the fire.</p> <p>Following this line of reasoning, only with new habits and the abandonment of nomadic life would a more complex language have developed. And, of course, this might as well imply that shepherds would have continued to use such a language, or at least such</p> <p>I&rsquo;m perfectly aware that what I have written doesn&rsquo;t have a single link to reality (and I don&rsquo;t know anything about language), but it was certainly more interesting than watching soccer and I had a nice walk in the dark.</p> <p style="font-size: 10px;"> <a href="">Posted via email</a> from <a href="">steko</a> </p> </div> <p><a href="">Developing a vocal language. Standing three miles apart.</a> was originally published by Stefano Costa at <a href=""></a> on June 14, 2010.</p> <![CDATA[How Spatialite can be used easily within a Java class (and within a JAVA Application)]]> 2010-06-03T00:00:00+02:00 2010-06-03T00:00:00+02:00 Luca Bianconi <p>After several days of hard study and hacking, I found out <strong>how to use</strong> <a href="">SPATIALITE</a> from a Java class or, more generally, from a Java application.</p> <p>I should really thank <a href="">Taro L. Saito</a> and <a href="">A.
Furieri</a> for having helped me accomplish this important task for my Google Summer of Code 2010 project.</p> <p>Here is a simple example explaining how I could work with <strong>sqlite</strong> and <strong>spatialite</strong> from a sample Java class. I also list the dependencies and the other pieces necessary for using it.</p> <p>First of all you need to get the libraries you’re going to use. They are:</p> <ul> <li><a href="">SQLiteJDBC</a>: the Java driver for using sqlite and its database files</li> <li><a href="">SPATIALITE</a>: where to find the SpatiaLite library whose functionality you want to include within your Java class.</li> </ul> <p>You should write a class like the one proposed here as an example, include the SQLiteJDBC driver (that is, the SQLite driver for Java) in the classpath, and finally activate SPATIALITE from the code with the “classical” statement for loading any extension in SQLite (that is: <code class="language-plaintext highlighter-rouge">SELECT load_extension(path_to_my_lib)</code>).</p> <p>Here is some sample code working on my Ubuntu 9.10 (note that the extension must be loaded <em>before</em> running any query that uses spatial functions):</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import org.sqlite.SQLiteConfig;

public class Test {
    public static void main(String[] args) throws ClassNotFoundException, SQLException {
        try {
            Class.forName("org.sqlite.JDBC");
            // extension loading must be enabled before opening the connection
            SQLiteConfig config = new SQLiteConfig();
            config.enableLoadExtension(true);
            Connection connection = DriverManager.getConnection("jdbc:sqlite:spatialiteDB.db", config.toProperties());
            Statement statement = connection.createStatement();
            // load the SpatiaLite extension so that spatial functions become available
            statement.execute("SELECT load_extension('/home/luca/workspace/SqliteTests/lib/ext/')");
            ResultSet rs = statement.executeQuery("SELECT Geometry FROM Towns");
            while ( {
                System.out.println("Type = " + rs.getString("Geometry"));
            ResultSet rs2 = statement.executeQuery("SELECT PK_UID, Area(Geometry), AsText(Centroid(Geometry)), Dimension(Geometry), GeometryType(Geometry) FROM Regions ORDER BY Area(Geometry) DESC LIMIT 5;");
            while ( {
                System.out.println(" = = = = = = = = = = = = ");
                System.out.println("PK_UID = " + rs2.getString(1));
                System.out.println("Area = " + rs2.getString(2));
                System.out.println("Centroid = " + rs2.getString(3));
                System.out.println("Dimension = " + rs2.getString(4));
                System.out.println("GeometryType = " + rs2.getString(5));
        } catch (Exception e) {
            System.out.println("ERROR " + e.getMessage());
</code></pre></div></div> <p>Thanks again Sandro and Taro. It works now!</p> <p><a href="">How Spatialite can be used easily within a Java class (and within a JAVA Application)</a> was originally published by Luca Bianconi at <a href=""></a> on June 03, 2010.</p> <![CDATA[Please sign to save the Italian Archaeological School at Athens]]> 2010-05-30T00:00:00+02:00 2010-05-30T00:00:00+02:00 Stefano Costa <p>We're unfortunately getting used to this kind of announcement about cuts to research and education bodies. Let's not get used to culture as a minor, unessential ingredient of our society.</p> <blockquote><p><em>Once again this year, culture will have to pay for the country's missing structural reforms. The SAIA is among the bodies, institutes and foundations that will no longer receive funding from the State, thus decreeing the death of Italian archaeological research in Greece, which has been going on for more than 100 years, and the end of the most important training school for Italian archaeologists.</em></p> <p><em>For over a century, first as a scientific expedition of individual scholars, then as a permanent Mission, and finally as the Athens base for archaeological research and excavations in Greece and in the areas of Hellenic civilization and for the training and specialization of young scholars, the Italian Archaeological School at Athens has been the point of reference for all the archaeologists and ancient historians who carry out research in Greece from the universities, the CNR or the Soprintendenze.</em></p> <p><em>Born to provide advanced training for the officials of the Italian archaeological Soprintendenze (a role it has performed admirably in the past and will continue to perform in the future) and as a coordination centre for the Italian Missions in Greece (and for some time also in the East), the School has from the very beginning embodied the synthesis of two fundamental functions, training and research, also hosting architecture graduates who work on the restoration, conservation and study of monuments.</em></p></blockquote> <p>If you can, please sign the <a href="">petition</a> to save the Italian Archaeological School at Athens.</p> <!--break--> <p><a href="">Please sign to save the Italian Archaeological School at Athens</a> was originally published by Stefano Costa at <a href=""></a> on May 30, 2010.</p> <![CDATA[Compiling gvSIG 2.0 with Ubuntu 10.04 LTS (& 9.10) (1st part) ]]> 2010-05-16T00:00:00+02:00 2010-05-16T00:00:00+02:00 Luca Bianconi <p>CONFIGURE JAVA</p> <p>You need a Sun Java 5 compatible compiler installed on your system. You can install it in parallel with the usual open-jdk, usually already present on your machine.
Check whether it’s already on your machine with:</p> <p><code class="language-plaintext highlighter-rouge">java -version</code></p> <p>If you work with Lucid Lynx: go to your sources.list file and uncomment:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>deb lucid partner
deb-src lucid partner
</code></pre></div></div> <p>You have to follow some easy steps:</p> <p><code class="language-plaintext highlighter-rouge">sudo apt-get install sun-java6-jdk</code> then <code class="language-plaintext highlighter-rouge">sudo update-alternatives --config java</code> and choose the sun-java one.</p> <p>Set two variables in your <code class="language-plaintext highlighter-rouge">~/.bash_profile</code> or your <code class="language-plaintext highlighter-rouge">~/.bashrc</code> (depending on your system you may find either of them):</p> <ul> <li><code class="language-plaintext highlighter-rouge">export JAVA_HOME=&lt;path to the folder of your jdk&gt;</code> (ex. )</li> <li><code class="language-plaintext highlighter-rouge">export PATH=$PATH:&lt;path to the bin folder within your jdk's dir&gt;</code> (ex.
)</li> </ul> <p>CONFIGURE MAVEN</p> <ul> <li>create an <code class="language-plaintext highlighter-rouge">.m2/</code> folder within your home directory</li> <li>create a <code class="language-plaintext highlighter-rouge">repository/</code> folder within the <code class="language-plaintext highlighter-rouge">.m2/</code> folder you’ve just created</li> <li> <p>create a text file called <code class="language-plaintext highlighter-rouge">settings.xml</code> in the <code class="language-plaintext highlighter-rouge">.m2/</code> folder you’ve created within your home/ directory, with the following content:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>native_platform=linux
export native_classifier=${native_platform}-${native_distribution}-${native_compiler}-${native_arch}-${native_libraryType}
</code></pre></div></div> </li> </ul> <p>Of course, you should replace each value with the one corresponding to your machine!</p> <p>INSTALL LIBRARIES</p> <p>Install the latest JNI libraries and dependencies, downloading version of org.gvsig.maven.base and executing <code class="language-plaintext highlighter-rouge">mvn install</code> from the folder <code class="language-plaintext highlighter-rouge">org.gvsig.maven.base/</code>.</p> <p>CONFIGURE ECLIPSE’S WORKSPACE AND COMPILE GVSIG</p> <p>Set the following properties:</p> <ul> <li>Window &gt; Preferences: General &gt; Workspace &gt; Text file encoding = ISO-8859-1</li> <li>Window &gt; Preferences &gt; Java &gt; Compiler &gt; Compiler Compliance Level = 1.5 (or 5.0). You must have the official Sun Java JRE, 1.5 compatible (that means 1.6 or higher, but with 1.5 compliance according to the Java compiler)</li> <li>Window &gt; Preferences &gt; General &gt; Network Connection set up to Manual or System</li> <li>download the latest gvSIG from with Window &gt; Open Perspective &gt; Other : SVN Repository Exploring</li> <li>Window &gt; Show view &gt; ant</li> <li>execute the build.xml via the ant perspective, with the following commands: <code class="language-plaintext highlighter-rouge">mvn-install mvn-configure-eclipse-workspace</code></li> <li>Exit Eclipse and open it again</li> <li>Window &gt; Preferences : Java &gt; Build Path &gt; Classpath Variables <code class="language-plaintext
highlighter-rouge">M2_REPO = [USER_HOME]/.m2/repository</code></li> </ul> <p>Execute build/projects/gvsig-standard/build.xml from the ant perspective:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>svn.checkout.all (it downloads and compiles a gvSIG standard version)
</code></pre></div></div> <p>If everything goes right, after getting BUILD SUCCESSFUL, select the build project and click on “run java application”. You will be able to choose which platform to use for your gvSIG.</p> <p>REFERENCES</p> <ul> <li><a href=";jsessionid=fd2e50665d22d051fea12f928c55?_33_redirect=%2Fweb%2Fguest%2Ftechnology-blog%3Fp_p_id%3D58%26p_p_lifecycle%3D0%26p_p_state%3Dnormal%26p_p_mode%3Dview%26p_p_col_id%3Dcolumn-2%26p_p_col_pos%3D1%26p_p_col_count%3D4%26_58_struts_action%3D%252Flogin%252Fforgot_password">;jsessionid=fd2e50665d22d051fea12f928c55?_33_redirect=%2Fweb%2Fguest%2Ftechnology-blog%3Fp_p_id%3D58%26p_p_lifecycle%3D0%26p_p_state%3Dnormal%26p_p_mode%3Dview%26p_p_col_id%3Dcolumn-2%26p_p_col_pos%3D1%26p_p_col_count%3D4%26_58_struts_action%3D%252Flogin%252Fforgot_password</a></li> <li><a href=""></a></li> <li><a href=""></a></li> <li><a href=""></a></li> <li><a href=""></a></li> </ul> <p><a href="">Compiling gvSIG 2.0 with Ubuntu 10.04 LTS (& 9.10) (1st part) </a> was originally published by Luca Bianconi at <a href=""></a> on May 16, 2010.</p> <![CDATA[GSOC 2010: thoughts in freedom about development. Me & gvSIG.]]> 2010-05-12T00:00:00+02:00 2010-05-12T00:00:00+02:00 Luca Bianconi <p>It’s time to get into this new adventure and sail towards new lands!
I’ve been accepted as a student for the Google Summer of Code 2010, and with me the IOSA team.</p> <p>The project is really interesting (adding support for SQLite and SpatiaLite), and gvSIG is related, no less than other GIS, to archaeological research.</p> <p>It’s quite scary to think about the huge amount of work needed to fulfil my task, and at the same time it’s really fun and amazing to realise that together with the sweat of coding we’ll certainly get a lot of fun!</p> <p>The gvSIG team helping me with the project seems very pleasant and nice.</p> <p>To keep some trace of the steps I’m going to follow, and to keep rough documentation of the most interesting links, approaches and code I find during this period of work, I’ve decided to write some posts on, so as not to forget this fantastic adventure I’m about to begin.</p> <p><a href="">GSOC 2010: thoughts in freedom about development. Me & gvSIG.</a> was originally published by Luca Bianconi at <a href=""></a> on May 12, 2010.</p> <![CDATA[“Diritti d'autore e banche dati per i beni culturali”: video footage available]]> 2010-05-12T00:00:00+02:00 2010-05-12T00:00:00+02:00 Stefano Costa <p>We have finally managed to put online the video footage of the seminar “Diritti d’autore e banche dati per i Beni Culturali” that we held last year in Genoa on 7 May 2009, organized by <a href="">grupporicerche</a>. You can read a brief report and watch videos at this URL: - video files are in the open and free <a href="">OGG Theora</a> format, and hosted at the <a href="">Internet Archive</a>. It’s all in Italian; we haven’t managed to create subtitles yet. If you need help, don’t hesitate to ask. We really hope that this material will be useful to anyone trying to push for open archaeology.
More meetings like this one will certainly help clarify the main issues in the field of copyright assignment and dissemination of archaeological data under free licenses.</p> <p><a href="">“Diritti d'autore e banche dati per i beni culturali”: video footage available</a> was originally published by Stefano Costa at <a href=""></a> on May 12, 2010.</p>