A collection of RDF libraries for JavaScript
This documentation covers the following graphy packages:
@graphy/content.nt.read
@graphy/content.nt.scan
@graphy/content.nt.scribe
@graphy/content.nt.write
@graphy/content.nq.read
@graphy/content.nq.scan
@graphy/content.nq.scribe
@graphy/content.nq.write
@graphy/content.ttl.read
@graphy/content.ttl.scribe
@graphy/content.ttl.write
@graphy/content.trig.read
@graphy/content.trig.scribe
@graphy/content.trig.write
@graphy/content.xml.scribe
.read
– for reading RDF from a string or stream
.scan
– advanced users – for reading RDF from a string or stream using multiple threads
.scribe
– for fast and simple RDF serialization to a writable stream
.write
– for dynamic and stylized RDF serialization to a writable stream
These modules offer two distinct approaches to binding event listeners. The traditional .on(...) approach lets you attach event listeners to the Transform object returned by the module function. This style was popularized by Node.js; however, it incurs a non-trivial amount of overhead for setup, teardown, and emittance.
A sleeker alternative to the .on(...) approach is the inline events style: simply provide a callback function for each event you want to listen to at the time the module function is called, passing a direct reference to at most one callback function per event. This approach allows the module to bypass the EventEmitter methods and can result in slightly better performance; it is also more pleasant to look at. However, it may not suit users who need to add multiple listeners per event, to remove listeners, or to add listeners at a later time.
See the read
examples for a demonstration of the two styles of attaching event listeners.
The following code block demonstrates three different ways to access these modules (shown here for the read
verb):
// stand-alone readers
const nt_read = require('@graphy/content.nt.read');
const nq_read = require('@graphy/content.nq.read');
const ttl_read = require('@graphy/content.ttl.read');
const trig_read = require('@graphy/content.trig.read');
// readers via named access from the graphy 'super module'
const graphy = require('graphy');
const nt_read = graphy.content.nt.read;
const nq_read = graphy.content.nq.read;
const ttl_read = graphy.content.ttl.read;
const trig_read = graphy.content.trig.read;
// readers via Content-Type query from the graphy 'super module'
const graphy = require('graphy');
const nt_read = graphy.content('application/n-triples').read;
const nq_read = graphy.content('application/n-quads').read;
const ttl_read = graphy.content('text/turtle').read;
const trig_read = graphy.content('application/trig').read;
This section documents the ‘verb’ part of each content module. A ‘verb’ refers to the fact that the module’s export is itself a function.
scribe and write verbs
The scribe and write verbs are both for serializing RDF to a writable stream. However, scribe is the more basic serializer built for speed, while write is the more advanced serializer built to support rich features such as stylized output (e.g., custom spacing), serializing comments, RDF collections, and so forth.
Additionally, write employs several safety checks that help prevent serializing malformed RDF from faulty input (e.g., literals in the subject position, blank nodes or literals in the predicate position, invalid IRI strings, and so on), whereas scribe performs no such checks.
The scribe verb supports the following WritableDataEvent types:
'prefixes'
'quad'
'c3r'
'c4r'
'comment'
'newlines'
The write verb supports the following WritableDataEvent types (* = difference from scribe):
'prefixes'
'quad'
'c3' *
'c3r'
'c4' *
'c4r'
'comment'
'newlines'
read([input: string | stream][, config: ReadConfig])
Returns a new Transform<string, Quad>, which accepts utf8-encoded strings on its writable side and pushes Quad objects on its readable side.
Accessible via the following modules:
@graphy/content.nt.read
@graphy/content.nq.read
@graphy/content.ttl.read
@graphy/content.trig.read
Read from a Turtle file in Node.js:
const fs = require('fs');
const ttl_read = require('@graphy/content.ttl.read');

fs.createReadStream('input.ttl')
	.pipe(ttl_read())
	.on('data', (y_quad) => {
		console.dir(y_quad.isolate());
	})
	.on('eof', () => {
		console.log('done!');
	});
Read a Turtle string:
const ttl_read = require('@graphy/content.ttl.read');

ttl_read(`
	@prefix foaf: <http://xmlns.com/foaf/0.1/> .
	<#spiderman> a foaf:Person ;
		foaf:name "Spiderman" .
`, {
	// whew! simplified inline events style ;)
	data(y_quad) {
		console.dir(y_quad);
	},
	eof(h_prefixes) {
		console.log('done!');
	},
})
Overloaded variants:
read([config: #ReadConfigNoInput])
read(input_string: string[, config: #ReadConfigNoInput])
– equivalent to: read(config).end(input_string, 'utf-8');
– equivalent to: read({...config, input: {string: input_string}});
read(input_stream: ReadableStream<string>[, config: #ReadConfigNoInput])
– equivalent to: input_stream.pipe(read(config));
– equivalent to: read({...config, input: {stream: input_stream}});
read(config: #ReadConfigWithInput)
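The equivalences above can be summarized by a small normalization routine (a hypothetical helper for illustration, not part of graphy's API):

```javascript
// hypothetical normalizer mirroring the documented `read` overloads;
// a stream is assumed to be any object with a `.pipe` method
function normalizeReadArgs(...args) {
	let [first, config = {}] = args;
	if (typeof first === 'string') {
		// read(input_string[, config]) ≡ read({...config, input: {string}})
		return { ...config, input: { string: first } };
	}
	if (first && typeof first.pipe === 'function') {
		// read(input_stream[, config]) ≡ read({...config, input: {stream}})
		return { ...config, input: { stream: first } };
	}
	// read([config]) — input may already be embedded (ReadConfigWithInput)
	return { ...(first || {}) };
}

const a = normalizeReadArgs('<a> <b> <c> .', { relax: true });
// a → { relax: true, input: { string: '<a> <b> <c> .' } }
```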
EXPERIMENTAL! The scan verb is currently experimental and has limited test coverage.
scan
([input: string | stream][, config:
ScanConfig
])
Accessible via the following modules:
@graphy/content.nt.scan
@graphy/content.nq.scan
Count the number of statements in an N-Quads document from stdin in Node.js:
const nq_scan = require('@graphy/content.nq.scan');

// create the scanner instance; providing the input stream as the first argument is equivalent to using the '.import' method on the returned instance
nq_scan(process.stdin, {
	// the code to run on each thread (creates a function that will be called with special arguments)
	run: /* syntax: js */ `
		(read, err, update, submit) => {
			let c_stmts = 0;
			return read({
				data() {
					c_stmts += 1;
				},
				error(e_read) {
					err(e_read);
				},
				eof() {
					submit(c_stmts);
				},
			});
		}
	`,

	// how to combine the results received from each call to 'submit'
	reduce: (c_stmts_a, c_stmts_b) => c_stmts_a + c_stmts_b,

	// the final value to report
	report(c_stmts) {
		console.log(`${c_stmts} statements total.`);
	},
});
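The reduce option behaves like a pairwise fold over the values each thread submits; with the sum reducer above:

```javascript
// pairwise fold mirroring how submitted counts combine across threads
const reduce = (c_stmts_a, c_stmts_b) => c_stmts_a + c_stmts_b;
const submitted = [12, 7, 3];               // e.g., counts submitted by three threads
const total = submitted.reduce(reduce, 0);  // 0 stands in for an `.initial` value
// total → 22
```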
Convert N-Triples from stdin to Turtle on stdout in Node.js:
const nt_scan = require('@graphy/content.nt.scan');

// create the scanner instance
nt_scan({
	// use the 'scribe' preset
	preset: 'scribe',

	// 'db_chunk' will be a Buffer in this preset; simply write it to stdout
	update(db_chunk) {
		process.stdout.write(db_chunk);
	},

	// this will fire once the scanner is done
	report() {
		// do stuff...
	},
})
	// example using the RDFJS Sink '.import' method to provide the input stream
	.import(process.stdin);
Overloaded variants:
scan([config: #ScanConfigNoInput])
scan(input_stream: ReadableStream<Buffer>[, config: #ScanConfigNoInput])
– equivalent to: scan(config).import(input_stream);
– equivalent to: scan({...config, input: {stream: input_stream}});
scan(input_string: string[, config: #ScanConfigNoInput])
– equivalent to: scan({...config, input: {string: input_string}});
scan(config: #ScanConfigWithInput)
scribe([config: ScribeConfig])
Returns a new Scriber, which transforms WritableDataEvent objects or @RDFJS/Quads on the writable side into utf8-encoded strings on the readable side. The transformation an object undergoes from the writable side to the readable side will vary depending on the capabilities of the specific output RDF format.
Accessible via the following modules:
@graphy/content.nt.scribe
@graphy/content.nq.scribe
@graphy/content.ttl.scribe
@graphy/content.trig.scribe
@graphy/content.xml.scribe
Serialize some RDF data to Turtle on-the-fly:
const ttl_scribe = require('@graphy/content.ttl.scribe');
const factory = require('@graphy/core.data.factory');

let ds_scriber = ttl_scribe({
	prefixes: {
		dbo: 'http://dbpedia.org/ontology/',
		dbr: 'http://dbpedia.org/resource/',
		ex: 'http://ex.org/',
	},
});

ds_scriber.on('data', (s_turtle) => {
	console.log(s_turtle+'');
});

// write an RDFJS quad
ds_scriber.write(factory.quad(...[
	factory.namedNode('http://dbpedia.org/resource/Banana'),
	factory.namedNode('http://www.w3.org/1999/02/22-rdf-syntax-ns#type'),
	factory.namedNode('http://dbpedia.org/ontology/Plant'),
]));

// or write using a concise-triples struct in strict-mode (c3r)
ds_scriber.write({
	type: 'c3r',
	value: {
		'dbr:Banana': {
			'ex:color': ['dbr:Yellow'],
		},
	},
});
Prints:
@prefix dbo: <http://dbpedia.org/ontology/> .
@prefix dbr: <http://dbpedia.org/resource/> .
@prefix ex: <http://ex.org/> .
dbr:Banana a dbo:Plant .
dbr:Banana ex:color dbr:Yellow .
write([config: WriteConfig])
Returns a new Writer, which transforms WritableDataEvent objects or RDFJS Quads on the writable side into utf8-encoded strings on the readable side. The transformation an object undergoes from the writable side to the readable side will vary depending on the capabilities of the specific output RDF format.
Accessible via the following modules:
@graphy/content.nt.write
@graphy/content.nq.write
@graphy/content.ttl.write
@graphy/content.trig.write
Serialize some RDF data to Turtle on-the-fly:
const ttl_write = require('@graphy/content.ttl.write');

let ds_writer = ttl_write({
	prefixes: {
		dbr: 'http://dbpedia.org/resource/',
		ex: 'http://ex.org/',
	},
});

ds_writer.on('data', (s_turtle) => {
	console.log(s_turtle+'');
});

ds_writer.write({
	type: 'c3',
	value: {
		'dbr:Banana': {
			'ex:lastSeen': new Date(),
		},
	},
});
Prints:
@prefix dbr: <http://dbpedia.org/resource/> .
@prefix ex: <http://ex.org/> .
dbr:Banana ex:lastSeen "2019-01-16T06:59:53.401Z"^^<http://www.w3.org/2001/XMLSchema#dateTime> .
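The output above uses the writer's default styling; the style options of WriteConfig (documented later in this section) can change it. A config sketch (the option values shown here are illustrative choices, not defaults):

```javascript
// a WriteConfig sketch combining documented stylistic options
const write_config = {
	prefixes: {
		dbr: 'http://dbpedia.org/resource/',
	},
	style: {
		indent: '  ',          // indent with two spaces instead of the default '\t'
		directives: 'sparql',  // emit lower-case SPARQL-style `prefix` directives
	},
};
```

Such a config would be passed as, e.g., `ttl_write(write_config)`.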
Methods:
write(data: WritableDataEvent | @RDFJS/Quad[, ignore: any][, callback: function])
– ignore corresponds to the encoding parameter of @node.js/stream.Writable#write, which is unused since the Transform is in objectMode.
import(stream: @RDFJS/Stream)
– implements @RDFJS/Sink.import
The definition for all possible events emitted during content reading. Please see this note about events to understand how this definition applies to both the traditional .on()-style of event binding as well as the inline style.
Events:
readable()
via @node.js/stream.Readable#event-readable
base(iri: string)
iri is the full IRI of the new base.
ttl_read('@base <http://example.org/vocabulary/> .', {
	base(p_iri) {
		p_iri; // 'http://example.org/vocabulary/'
	},
});
prefix(id: string, iri: string)
id will be the name of the prefix without the colon and iri will be the full IRI of the associated mapping.
ttl_read('@prefix dbr: <http://dbpedia.org/resource/> .', {
	prefix(s_id, p_iri) {
		s_id; // 'dbr'
		p_iri; // 'http://dbpedia.org/resource/'
	},
});
comment(comment: string)
Gets called for each comment (the contents following a # symbol until end-of-line) as soon as it is parsed.
// inline event style (less overhead)
ttl_read(`
	# hello world!
	<#banana> a <#Fruit> .
`, {
	comment(s_comment) {
		s_comment; // ' hello world!'
	},
});

// attach event listener style (more overhead)
let ds_read = ttl_read(`
	# hello world!
	<#banana> a <#Fruit> .
`);
ds_read.on('comment', (s_comment) => {
	s_comment; // ' hello world!'
});
data(quad: Quad)
via @node.js/stream.Readable#event-data
// inline event style (less overhead)
ttl_read('<#banana> a <#Fruit> .', {
	data(y_quad) {
		y_quad.predicate.value; // 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type'
	},
});

// attach event listener style (more overhead)
let ds_read = ttl_read('<#banana> a <#Fruit> .');
ds_read.on('data', (y_quad) => {
	y_quad.predicate.value; // 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type'
});
enter(graph: Term)
Gets called each time the opening brace of a graph block { is read. graph will either be a NamedNode, BlankNode, or DefaultGraph.
// only inspect triples within a certain graph
let b_inspect = false;
trig_read(ds_input, {
	enter(y_graph) {
		if(y_graph.value === 'http://target-graph') b_inspect = true;
	},
	exit(y_graph) {
		b_inspect = false;
	},
	data(y_quad) {
		if(b_inspect) { // much faster than comparing y_quad.graph to a string!
			// do something with triples
		}
	},
});
exit(graph: NamedNode)
Gets called each time the closing brace of a graph block } is read. graph will either be a NamedNode, BlankNode, or DefaultGraph.
progress(delta: integer)
delta will reflect the number of characters that were consumed from the input which resulted in a change to the reader's internal state (i.e., incomplete tokens must wait for the next chunk to be terminated). This event offers a nice way to provide progress updates to the user; however, that requires knowing ahead of time how many characters in total are contained in the input, which will always be less than or equal to the total number of bytes of the document, depending on how many surrogate pairs are present in the utf8-encoded string. This event also provides hints to resource-hungry applications about when it might be an opportunistic time to perform blocking tasks. This event will also be called right before the 'eof' event with a delta equal to 0.
eof(prefixes: #hash/prefix-mappings)
Gets called once the end of input has been reached, before the 'finish' event is emitted; useful for obtaining the final 'prefixes' mappings. This event indicates that the input has been entirely consumed (i.e., no errors occurred while reading) and the only events that will follow are the 'finish' and 'end' events.
finish()
via @node.js/stream.Writable#event-finish
end()
via @node.js/stream.Readable#event-end
Caution! Be aware that this event only fires if the transform is being read (e.g., via a 'data' event listener); otherwise it will never fire. If you are only interested in consuming the input (e.g., for validation), or want a clear indication that the input has been consumed, use the 'eof' event instead.
error(err: Error)
via @node.js/stream.Readable#event-error
update(msg: any, thread: int)
Gets called each time a thread's run code invokes its update() function to emit some information while reading is happening. Useful when the task objective is to write streaming data to some output or to provide progress feedback to the user. thread will be 0 if the call came directly from the main thread; otherwise it will be an index corresponding to the worker, starting at 1.
report(value: any)
Gets called once the scanner has received a value from each thread's submit() call and reduced those values to a single value using the .reduce function passed in the ScanConfig.
error(err: Error, thread: int)
thread will be 0 if the call came directly from the main thread; otherwise it will be an index corresponding to the worker, starting at 1.
The definition for all possible events emitted during content writing. Please see this note about events to understand how this definition applies to both the traditional .on()-style of event binding as well as the inline style.
Events:
warning(message: string)
An interface that defines the config object passed to a content reader.
Options:
dataFactory: DataFactory=@graphy/core.data.factory
– DataFactory implementation that will be used to create all Terms and Quads. The default implementation provided by graphy tends to perform a tad better and enables readers to create specialized Terms such as Booleans, Integers, Decimals, Doubles, and so on.
baseURI | baseUri | baseIRI | baseIri: string
– sets the starting base URI for the RDF document.
relax: boolean=false
– by default, the contents of tokens are validated, e.g., checking for invalid characters in IRIs, literals, and so on. The stream will emit an 'error' event if an invalid token is encountered. Setting the relax option to true will permit as wide a character range within tokens as possible (i.e., it will allow any characters not in the lookahead table). Using the relax option may be useful when trying to recover improperly formatted Turtle documents; note that disabling validation also yields slightly faster parsing for valid documents, since normal validation adds overhead to reading.
maxTokenLength: number=2048
– defines the maximum number of characters to expect of any token other than a quoted literal. This option only exists to prevent invalid input from endlessly consuming the reader when using a stream as input. By default, this value is set to 2048, which is more than the recommended maximum URL length. However, you may wish to set this value to Infinity if you never expect to encounter invalid syntax on the input stream.
maxStringLength: number=Infinity
– defines the maximum number of characters to expect for any string literal. This option only exists to prevent invalid input from endlessly consuming the reader (such as a long-quoted literal """ that never ends...) when using a stream as input. By default, this value is set to Infinity (no limit). However, you may wish to set this value to some reasonable upper bound (such as 65536 == 64 KiB) if you want to prevent possible memory leaks from invalid inputs or need your program to gracefully recover from such syntax errors.
Options:
Required:
input: UseInputString | UseInputStream
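Putting a few of the reader options together (a sketch; the values are illustrative choices, not defaults, and the input key follows the UseInputString shape):

```javascript
// a ReadConfig sketch using the options documented above
const read_config = {
	input: { string: '<a> <b> <c> .' },  // ReadConfigWithInput requires `input`
	baseIri: 'http://example.org/',      // starting base for relative IRIs
	relax: false,                        // keep token validation on
	maxTokenLength: 2048,
	maxStringLength: 65536,              // cap literals at 64 KiB to bound memory
};
```

Such a config would be passed as, e.g., `ttl_read(read_config)`.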
An interface that defines the config object passed to a content scanner.
Options:
preset: string
– should be one of the following:
'count' – counts the number of statements in the document, reduces by taking the sum of submitted values, and calls report() with the final result.
'scribe' – converts each statement into Turtle (or TriG) using the corresponding content scriber. Calls update() with each streaming chunk of serialized output. If the update comes from the main thread (indicated by the second argument being 0), the received value is a string; otherwise it is a Buffer. The reason for this discrepancy is to minimize redundant string encoding/decoding: each chunk originates as a string, and the workers must encode theirs before transferring to main, whereas the main thread does not need to encode its strings. You, the user, either need strings or Buffers – choose to encode the strings from main OR decode the Buffers from workers. Alternatively, you could simply write the value to a WritableStream (if that is the destination anyway) and it will take care of the rest.
'ndjson' – converts each statement into Newline-Delimited JSON. Behaves similarly to 'scribe' in the way that streaming chunks of serialized output are sent to the main thread.
relax: boolean=false
– see the .relax option in ReadConfig.
run: #string/js-function-scan
– a string of JavaScript code that will be eval'd on each worker thread and once on the main thread. It is called with the following arguments:
read(...) – the 'suggested' content reader verb to use (will be either the N-Triples or N-Quads reader depending on the scanner format). Call this argument as a function to construct the ReadableStream that needs to be returned by your custom function code.
err(what: Error) – function that will send what to the main thread and eventually propagate to the scanner's 'error' event listener.
update(value: any[, transfers: Array]) – function that will send value to the main thread using .postMessage() and eventually propagate to the scanner's 'update' event listener. Use transfers to specify transferable objects such as large TypedArrays or ArrayBuffers.
submit(value: any[, transfers: Array]) – function that will send value to the main thread using .postMessage() and eventually propagate to the scanner's .reduce() function. Calling submit will effectively terminate the worker, as it indicates the task has completed. Use transfers to specify transferable objects such as large TypedArrays or ArrayBuffers.
user: any – custom value of any type supported by the structured clone algorithm that was passed to the scanner's .user property OR returned from calling the .user property if it was a function. Useful for passing data (such as prefix mappings) or custom options to each thread.
isWorker: boolean – indicates whether or not the code is running on a worker thread. Useful for determining if data passed to update/submit needs to cross threads, in which case using ArrayBuffers to transfer may be more efficient.
reduce(accumulator: any, current: any) => output: any
– callback function that reduces two input arguments into a single output value. On the first invocation, the accumulator argument will either be the .initial property provided to the scanner's constructor, or, if nothing was provided to the .initial property, the value sent to submit() by the main thread.
receive(value: any, thread: int) => output: any
– callback function that takes value as it was received from a worker and 'recreates' the intended object before it is sent to .reduce. Useful for unmarshalling instances from workers.
threads: int
– manually sets the total number of threads to use (including the main thread).
An interface that defines the config object passed to a content writer.
Options:
prefixes: #hash/prefix-mappings
– prefix mappings to use in order to expand the concise-term strings within concise-quad hashes as they are written. These prefixes will also be used to create prefix statements and terse terms on the output stream whenever possible (e.g., for Turtle and TriG documents).
lists: #ListConfig
– globally sets the predicates to use when serializing list structures (defaults to using RDF Collections).
style
– configures stylistic options to customize serialization for the given output format:
.indent: string='\t'
– sets the indentation string to use.
.graphKeyword: boolean | string=''
– only supported by the TriG writer. If true, will write GRAPH before each graph open block. If a string is given, it must match /^graph$/i.
.simplifyDefaultGraph: boolean=false
– only supported by the TriG writer. If true, will omit serializing the surrounding optional graph block for quads within the default graph (see Example 3 from the TriG specification).
.directives: string='turtle'
– acceptable values are 'turtle', 'Turtle', 'TURTLE', 'sparql', 'Sparql', or 'SPARQL', to indicate both the token type (i.e., Turtle-style @prefix or SPARQL-style prefix) and the capitalization (i.e., lower-case @prefix / prefix; pascal-case @Prefix / Prefix; or upper-case @PREFIX / PREFIX).
.first: #string/c1
– the predicate to use for specifying the 'first' item of the linked-list structure.
.rest: #string/c1
– the predicate to use for specifying the 'rest' item of the linked-list structure.
.nil: #string/c1
– the object to use for specifying the terminating 'nil' item of the linked-list structure.
.width: int
– if specified, breaks comments longer than the given width onto multiple lines.
Indicates a utf8-encoded string to use as input to a reader.
.string: string
Indicates a readable stream to use as input to a reader.
.stream: ReadableStream<string>
An object that describes an event of writable RDF data (including metadata and directives such as prefix mappings, comments, etc.).
.type: string
– the type of event this object represents; see below.
.value: any
– the value of the object.
.type should be one of the following:
'c3' – (full-mode) write a set of triples to the output using the full-mode of concise triples. Expects .value to be a concise triple hash in full-mode.
const factory = require('@graphy/core.data.factory');
const stream = require('@graphy/core.iso.stream');
const ttl_write = require('@graphy/content.ttl.write');
// `stream.source(data).pipe(dst)` is essentially `dst.write(data); dst.end()`
stream.source({
	type: 'c3',
	value: {
		[factory.comment()]: 'banana example',
		'dbr:Banana': {
			a: 'dbo:Fruit',
			'rdfs:label': [
				'@en"Banana',
				'@fr"Banane',
			],
		},
	},
}).pipe(ttl_write({
	prefixes: {
		dbo: 'http://dbpedia.org/ontology/',
		dbr: 'http://dbpedia.org/resource/',
		rdfs: 'http://www.w3.org/2000/01/rdf-schema#',
	},
})).pipe(process.stdout);
@prefix dbo: <http://dbpedia.org/ontology/> .
@prefix dbr: <http://dbpedia.org/resource/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
# banana example
dbr:Banana a dbo:Fruit ;
	rdfs:label "Banana"@en, "Banane"@fr .
'c3r' – (strict-mode) write a set of triples to the output using the strict-mode of concise triples. Expects .value to be a concise triple hash in strict-mode.
const stream = require('@graphy/core.iso.stream');
const ttl_write = require('@graphy/content.ttl.write');
// `stream.source(data).pipe(dst)` is essentially `dst.write(data); dst.end()`
stream.source({
	type: 'c3r',
	value: {
		'dbr:Banana': {
			a: ['dbo:Fruit'],
			'rdfs:label': [
				'@en"Banana',
				'@fr"Banane',
			],
		},
	},
}).pipe(ttl_write({
	prefixes: {
		dbo: 'http://dbpedia.org/ontology/',
		dbr: 'http://dbpedia.org/resource/',
		rdfs: 'http://www.w3.org/2000/01/rdf-schema#',
	},
})).pipe(process.stdout);
@prefix dbo: <http://dbpedia.org/ontology/> .
@prefix dbr: <http://dbpedia.org/resource/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
dbr:Banana a dbo:Fruit ;
	rdfs:label "Banana"@en, "Banane"@fr .
'c4' – (full-mode) write a set of quads to the output using the full-mode of concise quads. Expects .value to be a concise quad hash in full-mode.
const factory = require('@graphy/core.data.factory');
const stream = require('@graphy/core.iso.stream');
const trig_write = require('@graphy/content.trig.write');
// `stream.source(data).pipe(dst)` is essentially `dst.write(data); dst.end()`
stream.source({
	type: 'c4',
	value: {
		[factory.comment()]: 'default graph',
		'*': {
			[factory.comment()]: 'banana example',
			'dbr:Banana': {
				[factory.comment()]: 'did i mention that comments work here too?',
				'rdfs:label': [
					'@en"Banana',
					'@fr"Banane',
				],
				[factory.comment()]: 'pretty cool i know ;)',
			},
		},
		[factory.comment()]: 'another graph (blank node)',
		'_:': {
			'dbr:Banana': {
				'dbr:color': '"yellow',
			},
		},
	},
}).pipe(trig_write({
	prefixes: {
		dbr: 'http://dbpedia.org/resource/',
		rdfs: 'http://www.w3.org/2000/01/rdf-schema#',
	},
	tokens: {
		graph: true, // output `GRAPH` tokens in TriG format
	},
})).pipe(process.stdout);
@prefix dbr: <http://dbpedia.org/resource/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
# default graph
GRAPH {
	# banana example
	dbr:Banana
		# did i mention that comments work here too?
		rdfs:label "Banana"@en, "Banane"@fr ;
		# pretty cool i know ;)
		.
}
# another graph (blank node)
GRAPH _:05565745_3fb2_4378_b4d0_bedb24d45d55 {
	dbr:Banana dbr:color "yellow" .
}
'c4r' – (strict-mode) write a set of quads to the output using the strict-mode of concise quads. Expects .value to be a concise quad hash in strict-mode.
const stream = require('@graphy/core.iso.stream');
const trig_write = require('@graphy/content.trig.write');
// `stream.source(data).pipe(dst)` is essentially `dst.write(data); dst.end()`
stream.source({
	type: 'c4r',
	value: {
		// default graph
		'*': {
			'dbr:Banana': {
				'rdfs:label': [
					'@en"Banana',
					'@fr"Banane',
				],
			},
		},
		// another graph (blank node)
		'_:': {
			'dbr:Banana': {
				// notice the value must be an Array
				'dbr:color': ['"yellow'],
			},
		},
	},
}).pipe(trig_write({
	prefixes: {
		dbr: 'http://dbpedia.org/resource/',
		rdfs: 'http://www.w3.org/2000/01/rdf-schema#',
	},
	tokens: {
		graph: true, // output `GRAPH` tokens in TriG format
	},
})).pipe(process.stdout);
@prefix dbr: <http://dbpedia.org/resource/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
GRAPH {
	dbr:Banana rdfs:label "Banana"@en, "Banane"@fr .
}
GRAPH _:05565745_3fb2_4378_b4d0_bedb24d45d55 {
	dbr:Banana dbr:color "yellow" .
}
'quad' – write a single RDFJS-compatible quad to the output. Expects .value to be a Quad.
const factory = require('@graphy/core.data.factory');
const ttl_write = require('@graphy/content.ttl.write');
// procedural style
let y_writer = ttl_write({
prefixes: {
dbr: 'http://dbpedia.org/resource/',
rdfs: 'http://www.w3.org/2000/01/rdf-schema#',
},
});
// pipe to stdout
y_writer.pipe(process.stdout);
// create RDF terms
let yt_subject = factory.namedNode('http://dbpedia.org/resource/Banana');
let yt_predicate = factory.namedNode('http://www.w3.org/2000/01/rdf-schema#label');
let yt_object = factory.literal('Banana', 'en');
// create RDF quad
let y_quad = factory.quad(yt_subject, yt_predicate, yt_object);
// write quad
y_writer.write({
type: 'quad',
value: y_quad,
});
// end stream
y_writer.end();
@prefix dbr: <http://dbpedia.org/resource/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
dbr:Banana rdfs:label "Banana"@en .
'prefixes' – updates the current prefix mappings which are used to expand CURIEs found in subsequent concise triple (c3 or c3r) and concise quad (c4 or c4r) hashes. Will also cause the writer to output the given prefix mappings if supported by the underlying RDF format. Expects .value to be a #hash/prefix-mappings.
const ttl_write = require('@graphy/content.ttl.write');
// procedural style
let y_writer = ttl_write();
// pipe to stdout
y_writer.pipe(process.stdout);
// write prefix mapping(s)
y_writer.write({
type: 'prefixes',
value: {
demo: 'http://ex.org/demo/',
},
});
// write some data using the new mapping
y_writer.write({
type: 'c3',
value: {
'demo:Test': {
'demo:isWorking': true,
},
},
});
// end stream
y_writer.end();
@prefix demo: <http://ex.org/demo/> .
demo:Test demo:isWorking true .
'array' – write a series of data events (useful for aggregating events synchronously before going async). Expects .value to be an Array<WritableDataEvent>.
'comment' – write a comment to the output stream in the appropriate format. Newlines within the string will become comments on separate lines. Comment-terminating substrings within the string will be escaped. Expects .value to be a string.
'newlines' – write the given number of newlines to the output. Expects .value to be a uint.
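Taken together, a serialization session is just a sequence of these WritableDataEvent objects; the following sketch batches several of the types above using 'array' (the prefix IRI and terms are illustrative):

```javascript
// a batch of WritableDataEvent objects, aggregated with the 'array' type
const events = {
	type: 'array',
	value: [
		{ type: 'comment', value: 'generated by example script' },
		{ type: 'prefixes', value: { ex: 'http://ex.org/' } },
		{ type: 'c3r', value: { 'ex:a': { 'ex:b': ['ex:c'] } } },
		{ type: 'newlines', value: 1 },
	],
};
```

Such a batch would be written in a single call, e.g., `ds_writer.write(events)`.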