Access to the internal LDBC information via its wiki and mailing lists, which includes access to discussion documents, graph user scenarios, datasets, draft benchmark specifications and software, results and discussions.
+
Right to membership of LDBC task forces. Currently, there are task forces on the Semantic Publishing Benchmark and on the Social Network Benchmark's Interactive, BI and Graphalytics workloads, as well as a Graph Query Language task force.
+
Access to research resources at academic partners. This includes the ability to look into research agendas, provide feedback and establish working relationships with students, as well as to arrange targeted internships with MSc and PhD students at these partners.
+
+
Annual membership fees (2023)
+
+
Sponsor company/institution: 8,800 GBP
+
Commercial company: 2,200 GBP
+
Non-commercial institution: 1,100 GBP
+
Individual Voting member (standard rate): 165 GBP
+
Individual Voting member (reduced rate for students, etc.): 10 GBP
+
Individual Associate member: no subscription fee
+
+
Note that LDBC charges non-sponsor company members a 2,000 GBP auditing fee per audit. Sponsor companies are exempt from this fee.
LDBC expects all its members to conscientiously observe the provisions of this Fair Use Policy for LDBC Benchmarks. LDBC-approved auditors must bring this Fair Use Policy for LDBC Benchmarks to the attention of any prospective or actual Test Sponsor. The Board of Directors of LDBC is responsible for enforcing this Policy and any alleged violations should be notified to info@ldbcouncil.org.
+
+
An “LDBC Draft Benchmark®” is a benchmark specification and any associated tooling or datasets, which has been written by an LDBC Task Force or Working Group whose charter includes the goal of achieving adoption of that specification as an LDBC standard, in accordance with Article 33 of the Articles of Association of the Company, “Approval of Standards”.
+
An “LDBC Benchmark®” is an LDBC Draft Benchmark once it has been adopted as an LDBC standard.
+
A result of a performance test can be fairly described as an “LDBC Benchmark Result”, if the test—which may be executed in several runs all of which use the same System Under Test (SUT)—has been successfully audited by an LDBC-approved auditor, and the result is reported as part of an LDBC Benchmark Results set, so it can be interpreted in context.
+
An audit can only be successful if the audited test
+
+
uses a SUT which faithfully implements the mandatory features and chosen optional features of an LDBC Benchmark,
+
completely exercises and generates results for all the mandatory requirements and chosen optional requirements of the LDBC Benchmark, and
+
is conducted and audited in conformance with all the relevant provisions of the LDBC Byelaws, including the statement of Total Cost of Ownership for the SUT and the reporting of price/performance metrics, such that the reported results can legitimately be used to compare the price-weighted performance of two SUTs.
+
+
+
“LDBC Benchmark Results” is the set of all results of a successfully audited test. A single LDBC Benchmark Result must be reported as part of such a set.
+
Any description or depiction of a specification that states or implies that it is an LDBC Draft Benchmark or an LDBC Benchmark when that is not the case is an infringement of LDBC’s trademark in the term “LDBC BENCHMARK”, which is registered in several major jurisdictions.
+
The same trademark is infringed by any software which is described or promoted as being an implementation of an LDBC Draft Benchmark or LDBC Benchmark, but which does not faithfully implement the features of or does not support the mandatory requirements of the stated specification.
+
The same trademark is infringed by any report or description of one or more performance test results which are not part of a set of LDBC Benchmark Results, or which in any other way states or implies that the results are endorsed by or originate from LDBC.
+
LDBC considers the use of that trademarked term with respect to performance test results solely in accordance with these Byelaws to be essential to the purpose and reputation of the Company and its benchmark standards.
+
+
Reporting of LDBC Benchmark Results
+
Once an auditor has approved a performance test result, including all required supporting documentation, as being successfully audited, then the Members Council and the Task Force responsible for the benchmark will be notified. The Board will have the results added to the LDBC web site as an LDBC Benchmark Results set according to the following procedure:
+
+
LDBC members will receive notification of the result via email to their designated contacts within five business days of LDBC receiving the notification.
+
Within five business days of this notice, the LDBC administrator will post the result on the LDBC web site under the rubric “LDBC Benchmark Results” unless the result is withdrawn by the Test Sponsor in the meantime.
+
A result may be challenged and subsequently be withdrawn by the LDBC following a review process as described in Article 7.6.
+
A result that is not challenged within 60 days of its publication will be automatically considered valid and may not be challenged after this time, and this fact will be recorded as part of the website posting of the result.
+
+
Fair Use of the trademark LDBC BENCHMARK
+
Any party wishing to avoid infringement of the trademarked term “LDBC BENCHMARK” should observe the following guidelines relating to its fair use.
+
LDBC encourages use, derived use, study, descriptions, critiques of and suggestions for improvement of LDBC Draft Benchmarks and LDBC Benchmarks. Our benchmark specifications are open-source, and we always welcome new contributors and members. These guidelines are only intended to prevent false or confusing claims relating to performance test results that are intended to be used for product comparisons.
+
+
If your work is derived from an LDBC Draft or standard Benchmark, or is a partial implementation, or if you are using part of one of our standards for a non-benchmarking purpose, then we would expect you to give attribution, in line with our Creative Commons CC-BY 4.0 licence.
+
We would also suggest that you make a statement, somewhere, somehow, that includes one of these phrases: “This is not an LDBC Benchmark”, “This is not an implementation of an LDBC Benchmark”, or “These are not LDBC Benchmark Results”.
+
We would also suggest that you explain, however briefly, how your work is related to LDBC standards and how it varies from them.
+
+
An example that illustrates these points: you might say something like this in a presentation:
+
+
“We used the LDBC SNB benchmark as a starting point. This isn’t the official LDBC standard: we added four queries because of X, and we don’t attempt to deal with the ACID requirement. The test results aren’t audited, so we want to be clear that this is not an LDBC Benchmark test run, and these numbers are not LDBC Benchmark Results. If you look at this link on the slide I’m showing you can see all the details of how our work is derived from, and varies from, the SNB 2.0 spec.”
+
+
Or you might say:
+
+
“For this example of a GQL graph type we used the LDBC SNB data model. This is nothing to do with the actual LDBC benchmark specification: we just used their data model as a use-case for illustrating what a graph schema might look like. We took this from the SNB 2.0 spec.”
The Financial Benchmark (FinBench) project defines a graph database benchmark targeting financial scenarios such as anti-fraud and risk control. It is maintained by the LDBC FinBench Task Force.
+
The benchmark has one workload, the Transaction Workload, capturing an OLTP scenario with complex read queries that access the neighbourhood of a given node in the graph and write queries that continuously insert or delete data in the graph. Its data sets are available on Google Drive.
For auditing requests, please reach out to info@ldbcouncil.org. Audits can only be commissioned by LDBC member companies by contracting any of the LDBC-certified auditors. Note that there is a 2,000 GBP auditing fee to be paid to the LDBC by non-sponsor company members. Sponsor companies are exempt from this fee.
The Graphalytics benchmark is an industrial-grade benchmark for graph analysis platforms such as Giraph, Spark GraphX, and GraphBLAS. It consists of six core algorithms, standard data sets, and reference outputs, enabling the objective comparison of graph analysis platforms.
+
The benchmark harness consists of a core component, which is extendable by a driver for each different platform implementation. The benchmark includes the following algorithms:
+
+
breadth-first search (BFS)
+
PageRank (PR)
+
weakly connected components (WCC)
+
community detection using label propagation (CDLP)
+
local clustering coefficient (LCC)
+
single-source shortest paths (SSSP)
+
+
The choice of these algorithms was carefully motivated, using the LDBC TUC and extensive literature surveys to ensure good coverage of scenarios. The standard data sets include both real and synthetic data sets, which are classified into intuitive “T-shirt” sizes (S, M, L, etc.).
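As an illustration of the first algorithm in the list, breadth-first search can be sketched in a few lines of Python. This is a minimal sketch for intuition only; actual Graphalytics implementations are platform-specific drivers, and the `bfs_depths`/`adjacency` names here are hypothetical:

```python
from collections import deque

def bfs_depths(adjacency, source):
    """Return the BFS depth of every vertex reachable from `source`.

    `adjacency` maps each vertex to an iterable of its neighbours.
    Per-vertex depths of this kind are what a BFS reference output contains.
    """
    depths = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adjacency[v]:
            if w not in depths:          # the first visit fixes the depth
                depths[w] = depths[v] + 1
                queue.append(w)
    return depths

# Tiny example graph with edges 0-1, 0-2, 1-3
graph = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(bfs_depths(graph, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```

The other five algorithms follow the same pattern of producing one value (a rank, a component label, a coefficient, a distance) per vertex, which is what makes the reference-output validation in the harness possible.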
+
Each experiment set in Graphalytics consists of multiple platform runs (a platform executes an algorithm on a data set), and a diverse set of experiments is carried out to evaluate different performance characteristics of a system-under-test.
+
All completed benchmarks must go through a strict validation process to ensure the integrity of the performance results.
+
The development of Graphalytics is supported by many active vendors in the field of large-scale graph analytics. Currently, Graphalytics facilitates benchmarks for a large number of graph analytics platforms, such as GraphBLAS, Giraph, GraphX, and PGX.D, allowing comparison of the state-of-the-art system performance of both community-driven and industry-driven platforms. To get started, the details of the Graphalytics documentation and its software components are described below.
In 2023, we will hold a new round of the Graphalytics competition. See the LDBC Graphalytics Benchmark presentation for an introduction to the benchmark framework and the competition’s rules.
Data sets (input graphs and expected results) are available on GitHub.
+
+
Rules
+
+
Participation is free.
+
There are no monetary prizes.
+
Single-node and distributed implementations are allowed.
+
Partial implementations (e.g. just small to mid-sized data sets and only a few algorithms) are allowed.
+
Submissions should execute each algorithm-data set combination three times. From these, the arithmetic mean of the processing times is used for ranking.
+
The results of the competition will be published on the LDBC website in the form of leaderboards, which rank submissions based on performance and on price-performance (performance adjusted for the system price).
+
There is a global leaderboard that includes all algorithms and scale factors. Additionally, there is a separate leaderboard for each scale (S, M, L, XL, 2XL+), algorithm, and system category (CPU-based/GPU-based, single-node vs. distributed) for fine-grained comparison.
+
Submissions are subject to code review and reproducibility attempts from the organizers.
+
Submissions using modern hardware are welcome (GPUs, FPGAs, etc.).
+
We encourage the use of cloud compute instances for running the benchmark (if possible).
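The three-run rule and the performance ranking above can be sketched as follows (a hypothetical example; the `runs` data and names are made up for illustration):

```python
from statistics import mean

# Hypothetical processing times (seconds) for three runs of each
# algorithm-data set combination, as the rules require.
runs = {
    ("BFS", "datagen-9_4-fb"): [12.1, 11.8, 12.4],
    ("PR",  "datagen-9_4-fb"): [30.5, 29.9, 31.0],
}

# The arithmetic mean of the three processing times is used for ranking.
scores = {combo: mean(times) for combo, times in runs.items()}
ranking = sorted(scores, key=scores.get)  # fastest first
print(ranking[0])  # ('BFS', 'datagen-9_4-fb')
```

A price-performance leaderboard would apply the same idea after dividing (or weighting) each score by the system price.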
+
+
Important dates
+
+
March 17: Competition is announced
+
April 25: Confirmation of intent
+
May 1: Submissions open
+
June 25: Submissions close
+
+
Data sets
+
The Graphalytics data sets are compressed using zstd. The total size of the compressed archives is approx. 350GB. When decompressed, the data sets require approximately 1.5TB of disk space.
Note that some of the Graphalytics data sets were fixed in March 2023. Prior to this, they were incorrectly packaged or had missing/incorrect reference outputs for certain algorithms. If you are uncertain whether you have the correct versions, cross-check them against these MD5 checksums: datagen-9_4-fb, datagen-sf3k-fb, datagen-sf10k-fb, graph500-27, graph500-28, graph500-29, graph500-30.
The Financial Benchmark (FinBench) project defines a graph database benchmark targeting financial scenarios such as anti-fraud and risk control. It is maintained by the LDBC FinBench Task Force. The benchmark currently has one workload, the Transaction Workload, capturing an OLTP scenario with complex read queries that access the …
David Püroja just completed his MSc thesis on creating a design towards SNB Interactive v2 at CWI’s Database Architectures group. David and I gave a deep-dive talk at the FOSDEM conference’s graph developer room titled The LDBC Social Network …
LDBC SNB provides a data generator, which produces synthetic datasets, mimicking a social network’s activity during a period of time. Datagen is defined by the characteristics of realism, scalability, determinism and usability. More than two years have elapsed since my last technical update on LDBC SNB Datagen, in which I discussed the reasons for moving the code to Apache Spark from the MapReduce-based Apache Hadoop implementation and the …
The Financial Benchmark (FinBench) project aims to define a benchmark for evaluating graph databases and to develop a data generation process and a query driver to make the evaluation of graph databases representative, reliable and comparable, especially in financial scenarios such as anti-fraud and risk control. FinBench is scheduled to be released in the …
LDBC’s Social Network Benchmark [4] (LDBC SNB) is an industrial and academic initiative, formed by principal actors in the field of graph-like data management. Its goal is to define a framework where different graph-based technologies can be fairly tested and compared, that can drive the identification of systems’ bottlenecks and required functionalities, and can help researchers open new frontiers in high-performance graph data …
Social Network Benchmark Suite (SNB): The SNB targets database management systems with graph-processing capabilities. It consists of two workloads, Interactive and Business Intelligence.
Apache Flink [1] is an open source platform for distributed stream and batch data processing. Flink’s core is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations over data streams. Flink also builds batch processing on top of the streaming engine, overlaying native iteration support, managed memory, and program optimization.
The number of datasets published in the Web of Data as part of the Linked Data Cloud is constantly increasing. The Linked Data paradigm is based on the unconstrained publication of information by different publishers, and the interlinking of web resources through “same-as” links which specify that two URIs correspond to the same real world object. In the vast number of data sources participating in the Linked Data Cloud, this information is not …
fairly frequent short updates, with no update contention worth mentioning
+
+
+
short random lookups
+
+
+
medium complex queries centered around a person’s social environment
+
+
+
The updates exist so as to invalidate strategies that rely too heavily on precomputation. The short lookups exist for the sake of realism; after all, an …
On the 31st of May, the GRADES workshop will take place in Melbourne within the ACM SIGMOD conference. GRADES started as an initiative of the Linked Data Benchmark Council at SIGMOD/PODS 2013, held in New York.
+
Among the papers published in this edition we have “Graphalytics: A Big Data Benchmark for Graph-Processing Platforms”, which presents a new benchmark that uses the Social Network Benchmark data generator of LDBC (that can …
SNB Interactive is the wild frontier, with very few rules. This is necessary, among other reasons, because there is no standard property graph data model, and because the contestants support a broad mix of programming models, ranging from in-process APIs to declarative query.
+
In the case of Virtuoso, we have played with SQL and SPARQL implementations. For a fixed schema and well known workload, SQL will always win. The reason for this is that …
LDBC is presenting two papers at the next edition of the ACM SIGMOD/PODS conference held in Melbourne from May 31st to June 4th, 2015. The annual ACM SIGMOD/PODS conference is a leading international forum for database researchers, practitioners, developers, and users to explore cutting-edge ideas and results, and to exchange techniques, tools and experiences.
+
On the industry track, LDBC will be presenting the Social Network Benchmark Interactive …
This post is the first in a series of blogs analyzing the LDBC Social Network Benchmark Interactive workload. This is written from the dual perspective of participating in the benchmark design and of building the OpenLink Virtuoso implementation of same.
+
With two implementations of SNB interactive at four different scales, we can take a first look at what the benchmark is really about. The hallmark of a benchmark implementation is that its …
In a previous 3-part blog series we touched upon the difficulties of executing the LDBC SNB Interactive (SNB) workload, while achieving good performance and scalability. What we didn’t discuss is why these difficulties were unique to SNB, and what aspects of the way we perform workload execution are scientific contributions - novel solutions to previously unsolved problems. This post will highlight the differences between SNB and more …
As discussed in previous posts, one of the features that makes Datagen more realistic is the fact that the activity volume of the simulated Persons is not uniform, but forms spikes. In this blog entry I want to explain more in depth how this is actually implemented inside of the generator.
+
First of all, I start with a few basics of how Datagen works internally. In Datagen, once the person graph has been created (persons and their relationships), …
This blog entry is about one of the features of DATAGEN that makes it different from other synthetic graph generators that can be found in the literature: the community structure of the graph.
+
When generating synthetic graphs, one must not only pay attention to quantitative measures such as the number of nodes and edges, but also to other, more qualitative characteristics such as the degree distribution and the clustering coefficient. Real graphs, and …
The Linked Data paradigm has become the prominent enabler for sharing huge volumes of data using Semantic Web technologies, and has created novel challenges for non-relational data management systems, such as RDF and graph engines. Efficient data access through queries is perhaps the most important data management task, and is enabled through query optimization techniques, which amount to the discovery of optimal or close to optimal execution …
When talking about DATAGEN and other graph generators with social network characteristics, our attention is typically drawn to the friendship subgraph and/or its structure. However, a social graph is more than a bunch of people connected by friendship relations; it contains many other things worth looking at. With a quick look at commercial social networks like Facebook, Twitter or Google+, one can easily identify a lot of other …
The SNB Driver part 1 post introduced, broadly, the challenges faced when developing a workload driver for the LDBC SNB benchmark. In this blog we’ll drill down deeper into the details of what it means to execute “dependent queries” during benchmark execution, and how this is handled in the driver. First of all, as many driver-specific terms will be used, below is a listing of their definitions. There is no need to read them in …
Based on the dependencies certain operations have, and on the granularity of parallelism we wish to achieve while executing them, we assign a Dependency Mode and an Execution Mode to every operation type. Using these …
The Semantic Publishing Instance Matching Benchmark (SPIMBench) is a novel benchmark for the assessment of instance matching techniques for RDF data with an associated schema. SPIMBench extends the state-of-the art instance matching benchmarks for RDF data in three main aspects: it allows for systematic scalability testing, supports a wider range of test cases including semantics-aware ones, and provides an enriched gold standard.
We are presently working on the SNB BI workload. Andrey Gubichev of TU München and I are going through the queries and are playing with two SQL-based implementations, one on Virtuoso and the other on Hyper.
+
As discussed before, the BI workload has the same choke points as TPC-H as a base but pushes further in terms of graphiness and query complexity.
+
There are obvious marketing applications for a SNB-like dataset. There are also security …
LDBC’s Semantic Publishing Benchmark (SPB) measures the performance of an RDF database in a load typical for metadata-based content publishing, such as the famous BBC Dynamic Semantic Publishing scenario. Such load combines tens of updates per second (e.g. adding metadata about new articles) with even higher volume of read requests (SPARQL queries collecting recent content and data to generate web page on a specific subject, e.g. Frank …
In previous posts (Getting started with snb, DATAGEN: data generation for the Social Network Benchmark), Arnau Prat discussed the main features and characteristics of DATAGEN: realism, scalability, determinism, usability. DATAGEN is the social network data generator used by the three LDBC-SNB workloads, which produces data simulating the activity in a social network site during a period of time. In this post, we conduct a series of experiments …
In this multi-part blog we consider the challenge of running the LDBC Social Network Interactive Benchmark (LDBC SNB) workload in parallel, i.e. the design of the workload driver that will issue the queries against the System Under Test (SUT). We go through design principles that were implemented for the LDBC SNB workload generator/load tester (simply referred to as driver). Software and documentation for this driver is available here: …
LDBC SPB (Semantic Publishing Benchmark) is based on the BBC linked data platform use case. Thus the data modelling and transaction mix reflect the BBC’s actual utilization of RDF. But a benchmark is not only a condensation of current best practices. The BBC linked data platform is an Ontotext GraphDB deployment. GraphDB was formerly known as OWLIM.
+
So, in SPB we wanted to address substantially more complex queries than the lookups that …
The Semantic Publishing Benchmark (SPB), developed in the context of LDBC, aims at measuring the read and write operations that can be performed in the context of a media organisation. It simulates the management and consumption of RDF metadata describing media assets and creative works. The scenario is based around a media organisation that maintains RDF descriptions of its catalogue of creative works. These descriptions use a set of ontologies …
The Linked Data Benchmark Council’s (LDBC) mission is to design and maintain benchmarks for graph data management systems, to establish and enforce standards in running these benchmarks, and to publish and arbitrate around the official benchmark results. The council and its https://ldbcouncil.org website just launched, and in its first 1.5 years of existence, most effort at LDBC has gone into investigating the needs of the field through interaction …
The Linked Data Benchmark Council (LDBC) is reaching a milestone today, June 23 2014, in announcing that two of the benchmarks that it has been developing for the past 1.5 years have now reached the status of Public Draft. This concerns the Semantic Publishing Benchmark (SPB) and the interactive workload of the Social Network Benchmark (SNB). In the case of LDBC, the release is staged: for now, the benchmark software runs only read-only queries. This will be …
Social Network interaction is amongst the most natural and widely spread activities in the internet society, and it has turned out to be a very useful way for people to socialise at different levels (friendship, professional, hobby, etc.). As such, Social Networks are well understood from the point of view of the data involved and the interaction required by their actors. Thus, the concepts of friends of friends, or retweet are well established …
It is with great pleasure that we announce the new LDBC organisation site at www.ldbcouncil.org. The LDBC started as a European Community FP7 funded project with the objective to create, foster and become an industry reference for benchmarking RDF and Graph technologies. A period of more than one and a half years has led us to the creation of the first two workloads, the Semantic Publishing Benchmark and the Social Network Benchmark in its …
Following the 1st International workshop on Benchmarking RDF Systems (BeRSys 2013) the aim of the BeRSys 2014 workshop is to provide a discussion forum where researchers and industrials can meet to discuss topics related to the performance of RDF systems. BeRSys 2014 is the only workshop dedicated to benchmarking different aspects of RDF engines - in the line of TPCTC series of workshops.The focus of the workshop is to expose and initiate …
As explained in a previous post, the LDBC Social Network Benchmark (LDBC-SNB) has the objective to provide a realistic yet challenging workload, consisting of a social network and a set of queries. Both have to be realistic, easy to understand and easy to generate. This post has the objective to discuss the main features of DATAGEN, the social network data generator provided by LDBC-SNB, which is an evolution of S3G2 [1].
In a previous blog post titled “Is SNB like Facebook’s LinkBench?”, Peter Boncz discusses the design philosophy that shapes SNB and how it compares to other existing benchmarks such as LinkBench. In this post, I will briefly introduce the essential parts forming SNB, which are DATAGEN, the LDBC execution driver and the workloads.
+
DATAGEN
+
DATAGEN is the data generator used by all the workloads of SNB. Here we introduced the …
The LDBC Social Network Benchmark (SNB) is composed of three distinct workloads, interactive, business intelligence and graph analytics. This post introduces the interactive workload.
+
The benchmark measures the speed of queries of medium complexity against a social network being constantly updated. The queries are scoped to a user’s social environment and potentially access data associated with the friends of a user and their friends.
In this post, I will discuss in some detail the rationale and goals of the design of the Social Network Benchmark (SNB) and explain how it relates to real …
Synopsis: Now is the time to finalize the interactive part of the Social Network Benchmark (SNB). The benchmark must be both credible in a real social network setting and pose new challenges. There are many hard queries but not enough representation for what online systems in fact do. So, the workload mix must strike a balance between the practice and presenting new challenges.
+
It is about to be showtime for LDBC. The initial installment of the …
In previous posts (this and this) we briefly introduced the design goals and philosophy behind DATAGEN, the data generator used in LDBC-SNB. In this post, I will explain how to use DATAGEN to generate the necessary datasets to run LDBC-SNB. Of course, as DATAGEN is continuously under development, the instructions given in this tutorial might change in the future.
+
Getting and Configuring Hadoop
+
DATAGEN runs on top of Hadoop 1.2.1 to scale. …
Note: consider this post as a continuation of the “Making it interactive” post by Orri Erling.
+
I have now completed the Virtuoso TPC-H work, including scale out. Optimization possibilities extend to infinity but the present level is good enough. TPC-H is the classic of all analytics benchmarks and is difficult enough, I have extensive commentary on this on my blog (In Hoc Signo Vinces series), including experimental results. This is, …
During the past six months we (the OWLIM Team at Ontotext) have integrated the LDBC Semantic Publishing Benchmark (LDBC-SPB) as a part of our development and release process.
+
First thing we’ve started using the LDBC-SPB for is to monitor the performance of our RDF Store when a new release is about to come out.
+
Initially we’ve decided to fix some of the benchmark parameters:
+
+
the dataset size: 50 million triples (LDBC-SPB50)
+
benchmark warmup …
The LDBC SNB Business Intelligence workload focuses on aggregation- and join-heavy complex queries touching a large portion of the graph, with microbatches of insert/delete operations. Its data sets are available in Cloudflare R2.
The workload produces scoring metrics for performance (power and throughput scores) at the given scale and price/performance metrics. The full disclosure reports (FDR) further break down the composition of the metric into its constituent parts, e.g. single query execution times.
+
Note that the system cost is the sum of the license, hardware, and maintenance costs, where maintenance means 24/7 support with a response time of less than 4 hours.
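As a rough sketch of how these cost components combine into a price/performance figure (all numbers here are hypothetical; the authoritative metric definitions are in the benchmark specification):

```python
# Hypothetical cost breakdown (in some currency) for a system under test.
license_cost = 20_000
hardware_cost = 55_000
maintenance_cost = 25_000   # 24/7 support with <4h response time

# The system cost is the sum of license, hardware, and maintenance costs.
system_cost = license_cost + hardware_cost + maintenance_cost

throughput = 5_000  # hypothetical throughput score

# Price/performance: cost per unit of performance (lower is better).
price_performance = system_cost / throughput
print(price_performance)  # 20.0
```

Breaking the metric down this way is what lets an FDR reader see whether a result is dominated by raw speed or by system price.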
The LDBC SNB Interactive workload captures a transactional graph processing scenario with complex read queries that access the neighbourhood of a given node in the graph and update operations that continuously insert new data into the graph. Its data sets are available in the CWI/SURF data repository.
+
The workload produces the throughput metric to characterize the performance at the given scale. The full disclosure reports (FDR) further detail the performance of the system under test by listing the data loading time and single query execution times.
+
Note that the system cost is the sum of the license, hardware, and maintenance costs, where maintenance means 24/7 support with a response time of less than 4 hours.
+
+
+
+
Audited results using a declarative query language
The Social Network Benchmark (SNB) suite defines graph workloads targeting database management systems and is maintained by the LDBC SNB Task Force.
+
The benchmark suite consists of two distinct workloads on a common dataset:
+
+
The Business Intelligence workload focuses on aggregation- and join-heavy complex queries touching a large portion of the graph, with microbatches of insert/delete operations. Its data sets are available in Cloudflare R2 and in the SURF/CWI repository.
+
The Interactive workload captures a transactional graph processing scenario with complex read queries that access the neighbourhood of a given node in the graph and update operations that continuously insert new data into the graph. Its data sets are available in the CWI/SURF data repository.
For auditing requests, please reach out to info@ldbcouncil.org. Audits can only be commissioned by LDBC member companies by contracting any of the LDBC-certified auditors. Note that there is a 2,000 GBP auditing fee to be paid to the LDBC by non-sponsor company members. Sponsor companies are exempt from this fee.
+
+
+
+
+
\ No newline at end of file
diff --git a/benchmarks/snb/ldbc-snb-work-charter.pdf b/benchmarks/snb/ldbc-snb-work-charter.pdf
new file mode 100644
index 00000000..9743055c
Binary files /dev/null and b/benchmarks/snb/ldbc-snb-work-charter.pdf differ
diff --git a/benchmarks/snb/retrospective-report-neo4j.pdf b/benchmarks/snb/retrospective-report-neo4j.pdf
new file mode 100644
index 00000000..e99d9111
Binary files /dev/null and b/benchmarks/snb/retrospective-report-neo4j.pdf differ
diff --git a/benchmarks/snb/retrospective-report-oracle.pdf b/benchmarks/snb/retrospective-report-oracle.pdf
new file mode 100644
index 00000000..fddf907e
Binary files /dev/null and b/benchmarks/snb/retrospective-report-oracle.pdf differ
diff --git a/benchmarks/snb/retrospective-report-tigergraph.pdf b/benchmarks/snb/retrospective-report-tigergraph.pdf
new file mode 100644
index 00000000..8468fd07
Binary files /dev/null and b/benchmarks/snb/retrospective-report-tigergraph.pdf differ
diff --git a/benchmarks/spb/LDBC-SPB-1G-Virtuoso-09062015.pdf b/benchmarks/spb/LDBC-SPB-1G-Virtuoso-09062015.pdf
new file mode 100644
index 00000000..a46849c9
Binary files /dev/null and b/benchmarks/spb/LDBC-SPB-1G-Virtuoso-09062015.pdf differ
diff --git a/benchmarks/spb/LDBC-SPB-1G-Virtuoso-EC2-10062015.pdf b/benchmarks/spb/LDBC-SPB-1G-Virtuoso-EC2-10062015.pdf
new file mode 100644
index 00000000..def15fc9
Binary files /dev/null and b/benchmarks/spb/LDBC-SPB-1G-Virtuoso-EC2-10062015.pdf differ
diff --git a/benchmarks/spb/LDBC-SPB-256M-GraphDB-10062015.pdf b/benchmarks/spb/LDBC-SPB-256M-GraphDB-10062015.pdf
new file mode 100644
index 00000000..1b30a9ca
Binary files /dev/null and b/benchmarks/spb/LDBC-SPB-256M-GraphDB-10062015.pdf differ
diff --git a/benchmarks/spb/LDBC-SPB-256M-Virtuoso-09062015.pdf b/benchmarks/spb/LDBC-SPB-256M-Virtuoso-09062015.pdf
new file mode 100644
index 00000000..672c9cf7
Binary files /dev/null and b/benchmarks/spb/LDBC-SPB-256M-Virtuoso-09062015.pdf differ
diff --git a/benchmarks/spb/LDBC-SPB-256M-Virtuoso-EC2-10062015-v2.pdf b/benchmarks/spb/LDBC-SPB-256M-Virtuoso-EC2-10062015-v2.pdf
new file mode 100644
index 00000000..34ea9e86
Binary files /dev/null and b/benchmarks/spb/LDBC-SPB-256M-Virtuoso-EC2-10062015-v2.pdf differ
diff --git a/benchmarks/spb/LDBC-SPB-64M-GraphDB-10062015.pdf b/benchmarks/spb/LDBC-SPB-64M-GraphDB-10062015.pdf
new file mode 100644
index 00000000..b0d71274
Binary files /dev/null and b/benchmarks/spb/LDBC-SPB-64M-GraphDB-10062015.pdf differ
diff --git a/benchmarks/spb/LDBC-SPB-64M-Virtuoso-09062015.pdf b/benchmarks/spb/LDBC-SPB-64M-Virtuoso-09062015.pdf
new file mode 100644
index 00000000..e5d86326
Binary files /dev/null and b/benchmarks/spb/LDBC-SPB-64M-Virtuoso-09062015.pdf differ
diff --git a/benchmarks/spb/LDBC-SPB-SF3-GraphDB-cluster-20230129-executive-summary.pdf b/benchmarks/spb/LDBC-SPB-SF3-GraphDB-cluster-20230129-executive-summary.pdf
new file mode 100644
index 00000000..58bf4f57
Binary files /dev/null and b/benchmarks/spb/LDBC-SPB-SF3-GraphDB-cluster-20230129-executive-summary.pdf differ
diff --git a/benchmarks/spb/LDBC-SPB-SF3-GraphDB-cluster-20230129.pdf b/benchmarks/spb/LDBC-SPB-SF3-GraphDB-cluster-20230129.pdf
new file mode 100644
index 00000000..8357aa61
Binary files /dev/null and b/benchmarks/spb/LDBC-SPB-SF3-GraphDB-cluster-20230129.pdf differ
diff --git a/benchmarks/spb/LDBC-SPB-SF3-GraphDB-single-machine-20230129-executive-summary.pdf b/benchmarks/spb/LDBC-SPB-SF3-GraphDB-single-machine-20230129-executive-summary.pdf
new file mode 100644
index 00000000..74243ed5
Binary files /dev/null and b/benchmarks/spb/LDBC-SPB-SF3-GraphDB-single-machine-20230129-executive-summary.pdf differ
diff --git a/benchmarks/spb/LDBC-SPB-SF3-GraphDB-single-machine-20230129.pdf b/benchmarks/spb/LDBC-SPB-SF3-GraphDB-single-machine-20230129.pdf
new file mode 100644
index 00000000..0c4ea85d
Binary files /dev/null and b/benchmarks/spb/LDBC-SPB-SF3-GraphDB-single-machine-20230129.pdf differ
diff --git a/benchmarks/spb/LDBC-SPB-SF5-GraphDB-cluster-20230129-executive-summary.pdf b/benchmarks/spb/LDBC-SPB-SF5-GraphDB-cluster-20230129-executive-summary.pdf
new file mode 100644
index 00000000..74cbcea1
Binary files /dev/null and b/benchmarks/spb/LDBC-SPB-SF5-GraphDB-cluster-20230129-executive-summary.pdf differ
diff --git a/benchmarks/spb/LDBC-SPB-SF5-GraphDB-cluster-20230129.pdf b/benchmarks/spb/LDBC-SPB-SF5-GraphDB-cluster-20230129.pdf
new file mode 100644
index 00000000..e17d3bc4
Binary files /dev/null and b/benchmarks/spb/LDBC-SPB-SF5-GraphDB-cluster-20230129.pdf differ
diff --git a/benchmarks/spb/LDBC-SPB-SF5-GraphDB-single-machine-20230129-executive-summary.pdf b/benchmarks/spb/LDBC-SPB-SF5-GraphDB-single-machine-20230129-executive-summary.pdf
new file mode 100644
index 00000000..e1e82e68
Binary files /dev/null and b/benchmarks/spb/LDBC-SPB-SF5-GraphDB-single-machine-20230129-executive-summary.pdf differ
diff --git a/benchmarks/spb/LDBC-SPB-SF5-GraphDB-single-machine-20230129.pdf b/benchmarks/spb/LDBC-SPB-SF5-GraphDB-single-machine-20230129.pdf
new file mode 100644
index 00000000..d22954fb
Binary files /dev/null and b/benchmarks/spb/LDBC-SPB-SF5-GraphDB-single-machine-20230129.pdf differ
diff --git a/benchmarks/spb/LDBC_SPB20_20150426_SF1_GraphDB-EE-6.2b.pdf b/benchmarks/spb/LDBC_SPB20_20150426_SF1_GraphDB-EE-6.2b.pdf
new file mode 100644
index 00000000..dc61ff00
Binary files /dev/null and b/benchmarks/spb/LDBC_SPB20_20150426_SF1_GraphDB-EE-6.2b.pdf differ
diff --git a/benchmarks/spb/LDBC_SPB20_20150426_SF3_GraphDB-EE-6.2b.pdf b/benchmarks/spb/LDBC_SPB20_20150426_SF3_GraphDB-EE-6.2b.pdf
new file mode 100644
index 00000000..f4865212
Binary files /dev/null and b/benchmarks/spb/LDBC_SPB20_20150426_SF3_GraphDB-EE-6.2b.pdf differ
diff --git a/benchmarks/spb/index.html b/benchmarks/spb/index.html
new file mode 100644
index 00000000..bbc9ec0b
--- /dev/null
+++ b/benchmarks/spb/index.html
@@ -0,0 +1,594 @@
+
+
+
+
+ LDBC Semantic Publishing Benchmark (LDBC SPB)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
LDBC Semantic Publishing Benchmark (LDBC-SPB)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
The Semantic Publishing Benchmark (SPB) is an LDBC benchmark for testing the performance of RDF engines, inspired by the Media/Publishing industry. In particular, LDBC worked with the British Broadcasting Corporation (BBC) to define this benchmark, for which the BBC donated workloads, ontologies and data. The publishing industry is an area where significant adoption of RDF is taking place.
+
There have been many academic benchmarks for RDF but none of these are truly industrial-grade. The SPB combines a set of complex queries under inference with continuous updates and special failover tests for systems implementing replication.
+
SPB performance is measured by producing a workload of CRUD (Create, Read, Update, Delete) operations which are executed simultaneously. The benchmark offers a data generator that uses real reference data to produce datasets of various sizes and tests the scalability aspect of RDF systems. The benchmark workload consists of (a) editorial operations that add new data or alter or delete existing data, and (b) aggregation operations that retrieve content according to various criteria. The benchmark also tests conformance to various rules of the OWL2-RL rule-set.
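For illustration only, the shape of such a simultaneous CRUD workload can be sketched against a toy in-memory store; `store`, `editorial_create`, `editorial_delete` and `aggregate_by_tag` are hypothetical stand-ins, not SPB components:

```python
# Toy sketch of an SPB-style mix: editorial (create/delete) operations
# interleaved with aggregation (read) operations, with a crude
# throughput measure. Not SPB code; purely illustrative.
import time

store = {}  # stand-in for an RDF store: doc id -> metadata

def editorial_create(doc_id, tags):      # editorial: add new data
    store[doc_id] = {"tags": set(tags)}

def editorial_delete(doc_id):            # editorial: delete existing data
    store.pop(doc_id, None)

def aggregate_by_tag(tag):               # aggregation: retrieve by criteria
    return [d for d, meta in store.items() if tag in meta["tags"]]

start = time.perf_counter()
for i in range(1000):
    editorial_create(f"doc{i}", ["news"] if i % 2 else ["sport"])
    if i % 10 == 0:
        aggregate_by_tag("news")
editorial_delete("doc0")
elapsed = time.perf_counter() - start
print(f"{1001 / elapsed:.0f} editorial ops/s (toy measure)")
```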
+
The SPB specification contains the description of the benchmark and the data generator; all information about its software components can be found on the SPB developer page.
If you are interested in joining our benchmark task forces, please reach out at info@ldbcouncil.org.
+
Licensing
+
Our benchmarks are licensed under the Apache Software License, Version 2.0 (license file, notice file).
+
Contributor License Agreement
+
To contribute to the LDBC repositories, we ask you to sign a CLA or become an LDBC member. These options are available for both individuals and organizations.
+
+
+
+
+
\ No newline at end of file
diff --git a/developer/finbench/index.html b/developer/finbench/index.html
new file mode 100644
index 00000000..588fa8b8
--- /dev/null
+++ b/developer/finbench/index.html
@@ -0,0 +1,10 @@
+
+
+
+ https://ldbcouncil.org/benchmarks/finbench/
+
+
+
+
+
+
diff --git a/developer/snb/index.html b/developer/snb/index.html
new file mode 100644
index 00000000..722eb105
--- /dev/null
+++ b/developer/snb/index.html
@@ -0,0 +1,10 @@
+
+
+
+ https://ldbcouncil.org/benchmarks/snb/
+
+
+
+
+
+
diff --git a/developer/spb/index.html b/developer/spb/index.html
new file mode 100644
index 00000000..b1746b28
--- /dev/null
+++ b/developer/spb/index.html
@@ -0,0 +1,10 @@
+
+
+
+ https://ldbcouncil.org/benchmarks/spb/
+
+
+
+
+
+
diff --git a/docs/LDBC.Articles.of.Association.ADOPTED.2021-01-14.pdf b/docs/LDBC.Articles.of.Association.ADOPTED.2021-01-14.pdf
new file mode 100644
index 00000000..cfbc0f83
Binary files /dev/null and b/docs/LDBC.Articles.of.Association.ADOPTED.2021-01-14.pdf differ
diff --git a/docs/LDBC.Articles.of.Association.ADOPTED.2023-03-30.pdf b/docs/LDBC.Articles.of.Association.ADOPTED.2023-03-30.pdf
new file mode 100644
index 00000000..284c51d2
Binary files /dev/null and b/docs/LDBC.Articles.of.Association.ADOPTED.2023-03-30.pdf differ
diff --git a/docs/LDBC.Byelaws.1.1.ADOPTED.2017-03-02.pdf b/docs/LDBC.Byelaws.1.1.ADOPTED.2017-03-02.pdf
new file mode 100644
index 00000000..da96ef26
Binary files /dev/null and b/docs/LDBC.Byelaws.1.1.ADOPTED.2017-03-02.pdf differ
diff --git a/docs/LDBC.Byelaws.1.2.ADOPTED.2020-07-28.pdf b/docs/LDBC.Byelaws.1.2.ADOPTED.2020-07-28.pdf
new file mode 100644
index 00000000..2416cbb1
Binary files /dev/null and b/docs/LDBC.Byelaws.1.2.ADOPTED.2020-07-28.pdf differ
diff --git a/docs/LDBC.Byelaws.1.3.ADOPTED.2021-01-14.pdf b/docs/LDBC.Byelaws.1.3.ADOPTED.2021-01-14.pdf
new file mode 100644
index 00000000..9800be11
Binary files /dev/null and b/docs/LDBC.Byelaws.1.3.ADOPTED.2021-01-14.pdf differ
diff --git a/docs/LDBC.Byelaws.1.4.ADOPTED.2023-05-02.pdf b/docs/LDBC.Byelaws.1.4.ADOPTED.2023-05-02.pdf
new file mode 100644
index 00000000..b4e7f658
Binary files /dev/null and b/docs/LDBC.Byelaws.1.4.ADOPTED.2023-05-02.pdf differ
diff --git a/docs/LDBC.Individual.Contributor.License.Agreement.Form-2020-10-23.pdf b/docs/LDBC.Individual.Contributor.License.Agreement.Form-2020-10-23.pdf
new file mode 100644
index 00000000..7f84e331
Binary files /dev/null and b/docs/LDBC.Individual.Contributor.License.Agreement.Form-2020-10-23.pdf differ
diff --git a/docs/LDBC.Membership--Joining.and.Renewing.(August.2023).pdf b/docs/LDBC.Membership--Joining.and.Renewing.(August.2023).pdf
new file mode 100644
index 00000000..b826fd39
Binary files /dev/null and b/docs/LDBC.Membership--Joining.and.Renewing.(August.2023).pdf differ
diff --git a/docs/LDBC.Membership.-.Joining.and.Renewing.2022.pdf b/docs/LDBC.Membership.-.Joining.and.Renewing.2022.pdf
new file mode 100644
index 00000000..7c7b9c28
Binary files /dev/null and b/docs/LDBC.Membership.-.Joining.and.Renewing.2022.pdf differ
diff --git a/docs/LDBC.Membership.Application.and.Renewal.Form.2021-01-14.pdf b/docs/LDBC.Membership.Application.and.Renewal.Form.2021-01-14.pdf
new file mode 100644
index 00000000..aea7ecf4
Binary files /dev/null and b/docs/LDBC.Membership.Application.and.Renewal.Form.2021-01-14.pdf differ
diff --git a/docs/LDBC.Membership.Application_Renewal.Form.and.incorporated.Membership.Agreement.(August.2023).pdf b/docs/LDBC.Membership.Application_Renewal.Form.and.incorporated.Membership.Agreement.(August.2023).pdf
new file mode 100644
index 00000000..d5dd797c
Binary files /dev/null and b/docs/LDBC.Membership.Application_Renewal.Form.and.incorporated.Membership.Agreement.(August.2023).pdf differ
diff --git a/docs/LDBC.Organization.Contributor.License.Agreement.Form-2020-10-23.pdf b/docs/LDBC.Organization.Contributor.License.Agreement.Form-2020-10-23.pdf
new file mode 100644
index 00000000..253b6776
Binary files /dev/null and b/docs/LDBC.Organization.Contributor.License.Agreement.Form-2020-10-23.pdf differ
diff --git a/docs/ldbc-snb-auditing-process.pdf b/docs/ldbc-snb-auditing-process.pdf
new file mode 100644
index 00000000..38464384
Binary files /dev/null and b/docs/ldbc-snb-auditing-process.pdf differ
diff --git a/docs/papers/LDBC-Open-Access-External-Paper-OAEP-2023-01--SQL_PGQ-data-model-and-graph-schema.DOI.10.54285_ldbc.QZSK3559.pdf b/docs/papers/LDBC-Open-Access-External-Paper-OAEP-2023-01--SQL_PGQ-data-model-and-graph-schema.DOI.10.54285_ldbc.QZSK3559.pdf
new file mode 100644
index 00000000..324fec17
Binary files /dev/null and b/docs/papers/LDBC-Open-Access-External-Paper-OAEP-2023-01--SQL_PGQ-data-model-and-graph-schema.DOI.10.54285_ldbc.QZSK3559.pdf differ
diff --git a/docs/papers/LDBC-Open-Access-External-Paper-OAEP-2023-02--Introduction-to-GQL-Schema-design.DOI.10.54285_ldbc.EPWQ6741.pdf b/docs/papers/LDBC-Open-Access-External-Paper-OAEP-2023-02--Introduction-to-GQL-Schema-design.DOI.10.54285_ldbc.EPWQ6741.pdf
new file mode 100644
index 00000000..899cbad5
Binary files /dev/null and b/docs/papers/LDBC-Open-Access-External-Paper-OAEP-2023-02--Introduction-to-GQL-Schema-design.DOI.10.54285_ldbc.EPWQ6741.pdf differ
diff --git a/docs/papers/LDBC-Open-Access-External-Paper-OAEP-2023-03--Cypher-schema-constraints-proposal.DOI.10.54285_ldbc.KKHM1756.pdf b/docs/papers/LDBC-Open-Access-External-Paper-OAEP-2023-03--Cypher-schema-constraints-proposal.DOI.10.54285_ldbc.KKHM1756.pdf
new file mode 100644
index 00000000..e5a385e4
Binary files /dev/null and b/docs/papers/LDBC-Open-Access-External-Paper-OAEP-2023-03--Cypher-schema-constraints-proposal.DOI.10.54285_ldbc.KKHM1756.pdf differ
diff --git a/docs/papers/LDBC-Open-Access-External-Paper-OAEP-2023-04--LDBC-Property-Graph-Schema-contributions-to-WG3.DOI.10.54285_ldbc.OFJF3566.pdf b/docs/papers/LDBC-Open-Access-External-Paper-OAEP-2023-04--LDBC-Property-Graph-Schema-contributions-to-WG3.DOI.10.54285_ldbc.OFJF3566.pdf
new file mode 100644
index 00000000..14cb184a
Binary files /dev/null and b/docs/papers/LDBC-Open-Access-External-Paper-OAEP-2023-04--LDBC-Property-Graph-Schema-contributions-to-WG3.DOI.10.54285_ldbc.OFJF3566.pdf differ
diff --git a/docs/papers/LDBC-Technical-Report-TR-2021-01--Property-graphs-and-paths-in-GQL-Mathematical-definitions.DOI.10.54285_ldbc.TZJP7279.pdf b/docs/papers/LDBC-Technical-Report-TR-2021-01--Property-graphs-and-paths-in-GQL-Mathematical-definitions.DOI.10.54285_ldbc.TZJP7279.pdf
new file mode 100644
index 00000000..0b499494
Binary files /dev/null and b/docs/papers/LDBC-Technical-Report-TR-2021-01--Property-graphs-and-paths-in-GQL-Mathematical-definitions.DOI.10.54285_ldbc.TZJP7279.pdf differ
diff --git a/docs/papers/LDBC-Work-Charter-WC-2022-02--LDBC-Extended-Graph-Schema--LEX--Work-Charter.DOI.10.54285_ldbc.VSBC2149.pdf b/docs/papers/LDBC-Work-Charter-WC-2022-02--LDBC-Extended-Graph-Schema--LEX--Work-Charter.DOI.10.54285_ldbc.VSBC2149.pdf
new file mode 100644
index 00000000..eeb23d0d
Binary files /dev/null and b/docs/papers/LDBC-Work-Charter-WC-2022-02--LDBC-Extended-Graph-Schema--LEX--Work-Charter.DOI.10.54285_ldbc.VSBC2149.pdf differ
diff --git a/docs/papers/datagen-deletes-grades-nda-2020.pdf b/docs/papers/datagen-deletes-grades-nda-2020.pdf
new file mode 100644
index 00000000..1900a604
Binary files /dev/null and b/docs/papers/datagen-deletes-grades-nda-2020.pdf differ
diff --git a/docs/papers/ldbc-acid-tpctc2020.pdf b/docs/papers/ldbc-acid-tpctc2020.pdf
new file mode 100644
index 00000000..19044a2b
Binary files /dev/null and b/docs/papers/ldbc-acid-tpctc2020.pdf differ
diff --git a/docs/papers/ldbc-organization-tpctc2023-preprint.pdf b/docs/papers/ldbc-organization-tpctc2023-preprint.pdf
new file mode 100644
index 00000000..f4e61661
Binary files /dev/null and b/docs/papers/ldbc-organization-tpctc2023-preprint.pdf differ
diff --git a/docs/papers/ldbc-snb-bi-grades-nda-2018.pdf b/docs/papers/ldbc-snb-bi-grades-nda-2018.pdf
new file mode 100644
index 00000000..272aad49
Binary files /dev/null and b/docs/papers/ldbc-snb-bi-grades-nda-2018.pdf differ
diff --git a/docs/papers/ldbc-snb-bi-vldb-2022.pdf b/docs/papers/ldbc-snb-bi-vldb-2022.pdf
new file mode 100644
index 00000000..cd354e6c
Binary files /dev/null and b/docs/papers/ldbc-snb-bi-vldb-2022.pdf differ
diff --git a/docs/papers/ldbc-snb-bi-vldb-2023.pdf b/docs/papers/ldbc-snb-bi-vldb-2023.pdf
new file mode 100644
index 00000000..cd354e6c
Binary files /dev/null and b/docs/papers/ldbc-snb-bi-vldb-2023.pdf differ
diff --git a/docs/papers/ldbc-snb-interactive-sigmod-2015.pdf b/docs/papers/ldbc-snb-interactive-sigmod-2015.pdf
new file mode 100644
index 00000000..bf76ed84
Binary files /dev/null and b/docs/papers/ldbc-snb-interactive-sigmod-2015.pdf differ
diff --git a/docs/papers/ldbc-snb-interactive-v2-tpctc2023-preprint.pdf b/docs/papers/ldbc-snb-interactive-v2-tpctc2023-preprint.pdf
new file mode 100644
index 00000000..533f885d
Binary files /dev/null and b/docs/papers/ldbc-snb-interactive-v2-tpctc2023-preprint.pdf differ
diff --git a/docs/papers/msc-thesis-david-puroja-snb-interactive-v2-2023.pdf b/docs/papers/msc-thesis-david-puroja-snb-interactive-v2-2023.pdf
new file mode 100644
index 00000000..ead0d1e7
Binary files /dev/null and b/docs/papers/msc-thesis-david-puroja-snb-interactive-v2-2023.pdf differ
diff --git a/docs/papers/parameter-curation-tpctc2014.pdf b/docs/papers/parameter-curation-tpctc2014.pdf
new file mode 100644
index 00000000..6d58915f
Binary files /dev/null and b/docs/papers/parameter-curation-tpctc2014.pdf differ
diff --git a/docs/papers/s3g2-scalable-data-generator-tpctc2012.pdf b/docs/papers/s3g2-scalable-data-generator-tpctc2012.pdf
new file mode 100644
index 00000000..e9b1578a
Binary files /dev/null and b/docs/papers/s3g2-scalable-data-generator-tpctc2012.pdf differ
diff --git a/docs/papers/tpc-h-analyzed-choke-points-tpctc2013.pdf b/docs/papers/tpc-h-analyzed-choke-points-tpctc2013.pdf
new file mode 100644
index 00000000..69eb527e
Binary files /dev/null and b/docs/papers/tpc-h-analyzed-choke-points-tpctc2013.pdf differ
diff --git a/docs/papers/w16045-prepub-LDBC-Technical-Report-TR-2021-01--Property-graphs-and-paths-in-GQL-Mathematical-definitions.pdf b/docs/papers/w16045-prepub-LDBC-Technical-Report-TR-2021-01--Property-graphs-and-paths-in-GQL-Mathematical-definitions.pdf
new file mode 100644
index 00000000..77854124
Binary files /dev/null and b/docs/papers/w16045-prepub-LDBC-Technical-Report-TR-2021-01--Property-graphs-and-paths-in-GQL-Mathematical-definitions.pdf differ
diff --git a/docs/presentations/graph-databases-2021-11.pdf b/docs/presentations/graph-databases-2021-11.pdf
new file mode 100644
index 00000000..eac7b460
Binary files /dev/null and b/docs/presentations/graph-databases-2021-11.pdf differ
diff --git a/docs/presentations/ldbc-snb-2021-12.pdf b/docs/presentations/ldbc-snb-2021-12.pdf
new file mode 100644
index 00000000..9cbc43e5
Binary files /dev/null and b/docs/presentations/ldbc-snb-2021-12.pdf differ
diff --git a/docs/presentations/ldbc-snb-2022-11.pdf b/docs/presentations/ldbc-snb-2022-11.pdf
new file mode 100644
index 00000000..9aec9e54
Binary files /dev/null and b/docs/presentations/ldbc-snb-2022-11.pdf differ
diff --git a/docs/presentations/tpctc-2023-ldbc-linked-data-benchmark-council-organization.pdf b/docs/presentations/tpctc-2023-ldbc-linked-data-benchmark-council-organization.pdf
new file mode 100644
index 00000000..82daeb91
Binary files /dev/null and b/docs/presentations/tpctc-2023-ldbc-linked-data-benchmark-council-organization.pdf differ
diff --git a/docs/presentations/tpctc-2023-ldbc-snb-interactive-v2.pdf b/docs/presentations/tpctc-2023-ldbc-snb-interactive-v2.pdf
new file mode 100644
index 00000000..7a1d7d5e
Binary files /dev/null and b/docs/presentations/tpctc-2023-ldbc-snb-interactive-v2.pdf differ
diff --git a/docs/presentations/vldb-2023-ldbc-snb-bi-slides-szarnyasg.pdf b/docs/presentations/vldb-2023-ldbc-snb-bi-slides-szarnyasg.pdf
new file mode 100644
index 00000000..90f1df3b
Binary files /dev/null and b/docs/presentations/vldb-2023-ldbc-snb-bi-slides-szarnyasg.pdf differ
diff --git a/event/eighth-tuc-meeting/index.html b/event/eighth-tuc-meeting/index.html
new file mode 100644
index 00000000..d8936196
--- /dev/null
+++ b/event/eighth-tuc-meeting/index.html
@@ -0,0 +1,575 @@
+
+
+
+
+ Eighth TUC Meeting
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Eighth TUC Meeting
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
by Damaris Coll / on 22 Jun 2016
+
+
+
+
+ Location: Oracle Conference Center in Redwood Shores (CA) US
+
+
+
+ Event dates: 22 Jun 2016 14:45 -- 23 Jun 2016 14:45 (local timezone)
+
+
+
+
The LDBC consortium is pleased to announce its Eighth Technical User Community (TUC) meeting.
+
This will be a two-day event at the Oracle Conference Center in Redwood Shores on Wednesday and Thursday, June 22–23, 2016.
+
This will be the second TUC meeting after the finalisation of the LDBC FP7 EC-funded project. The event will cover the following aspects:
+
+
A two-day event with one day devoted to users’ experiences and one day devoted to benchmarking experiences.
+
Presentation of the benchmarking results for the different benchmarks.
+
Interaction with the new LDBC Board of Directors and the LDBC organisation officials.
+
+
We welcome all users of RDF and Graph technologies to attend. If you are interested, please contact Damaris Coll (UPC) at damaris@ac.upc.edu; in order to notify Oracle security in advance, registration requests need to be in by June 12.
+
In the agenda, there will be talks by LDBC members about LDBC activities, but there will also be room for a number of short 20-minute talks by other participants. We are specifically interested in learning about new challenges in graph data management (where benchmarking would become useful) and in hearing about actual user stories and scenarios that could inspire benchmarks. Further, talks that provide feedback on existing benchmarks (or proposals) are very relevant. But nothing is excluded a priori if it is related to graph data management. Talk proposals can be forwarded to Damaris as well and will be handled by Peter Boncz and Larri.
+
Further, if you or your colleagues have contacts with companies that deal with graph data management scenarios, we encourage you to invite them to attend and possibly present. LDBC is always looking to expand its circle of TUC meeting participants, its contacts with graph technology users, and eventually its membership base.
+
On this page, you’ll find information about the following items:
[Southbound] - Take Highway 101 South (toward San Jose) to the Ralston Ave./Marine World Parkway exit. Take Marine World Parkway east which will loop you back over the freeway. Make a left at the first light onto Oracle Parkway. 350 Oracle Parkway will be on the right.
+
[Northbound] - Take Highway 101 North (toward San Francisco) to the Ralston Ave./Marine World Parkway exit. Take the first exit ramp onto Marine World Parkway. Make a left at the first light onto Oracle Parkway. 350 Oracle Parkway will be on the right.
+
+
Parking
+
The Conference Center has a designated parking lot located directly across from the building. If the lot is full, there is additional parking in any of the parking garages located nearby. No parking permits are needed.
+
Public transport
+
Take the Caltrain to either San Carlos or Hillsdale and take the free Oracle shuttle from there. Get off the Oracle shuttle at 100 Oracle Parkway (second stop) and walk 5 minutes to get to the Conference Center.
You can also take the Caltrain to Belmont and walk 23 min, instead of taking the Oracle shuttle.
+
Alternatively, SamTrans (San Mateo County’s Transit Agency) provides public bus service between the Millbrae BART station and Palo Alto with three stops on Oracle Parkway - one of which is directly in front of the Oracle Conference Center.
LDBC Technical User Community meetings serve to (1) learn about progress in the LDBC task forces on graph benchmark development, (2) give feedback on these, (3) hear about user experiences with graph data management technologies, and (4) learn about new graph technologies from researchers or industry – LDBC counts Oracle, IBM, Intel, Neo4j and Huawei among its members.
+
This TUC meeting will be a one-day event preceding the SIGMOD/PODS 2018 conference in Houston, Texas (not too far away, taking place during the whole of the following week). Note also that at SIGMOD/PODS in Houston on Sunday 10, there is a research workshop on graph data management technology called GRADES-NDA 2018, so you might combine travel.
+
We welcome all users of RDF and Graph technologies to attend. If you are interested in attending the event, please contact Damaris Coll (UPC) at damaris@ac.upc.edu to register.
+
=> registration is free, but required <=
+
In the agenda, there will be talks by LDBC members about LDBC activities, but there will also be room for a number of short 20-minute talks by other participants. We are specifically interested in learning about new challenges in graph data management (where benchmarking would become useful) and in hearing about actual user stories and scenarios that could inspire benchmarks. Further, talks that provide feedback on existing benchmarks (or proposals) are very relevant. But nothing is excluded a priori if it is related to graph data management. Talk proposals are handled by Peter Boncz (boncz@cwi.nl) and Larri (larri@ac.upc.edu). The local organizer is Juan Sequeda (juanfederico@gmail.com).
+
Further, if you or your colleagues have contacts with companies that deal with graph data management scenarios, we encourage you to invite them to attend and possibly present. LDBC is always looking to expand its circle of TUC meeting participants, its contacts with graph technology users, and eventually its membership base.
+
Agenda
+
In the TUC meeting there will be:
+
+
updates on progress with LDBC benchmarks, specifically the Social Network Benchmark (SNB) and its interactive, business analytics and graphalytics workloads.
+
talks by data management practitioners highlighting graph data management challenges and products
+
+
The meeting will start on Friday morning, with a program from 10:30-17:00:
+
+
+
10:30-10:35 Peter Boncz (CWI) - introduction to the LDBC TUC meeting
The GDC complex has a North and a South building. GDC 6.302 is in the North building. When you enter the main entrance, the North building is on the left and is served by a pair of elevators. Take either elevator to the 6th floor. Exit the elevator on the 6th floor, then turn left, right, left.
+
From Austin to SIGMOD/PODS (Houston) on Saturday June 9
+
Many of the attendees will be going to SIGMOD/PODS which will be held in Houston.
+ Location: Philadelphia, Marriott Philadelphia Downtown
+
+
+
+ Event dates: 17 Jun 2022 09:20 -- 18 Jun 2022 15:30 (local timezone)
+
+
+
+
Organizers: Gábor Szárnyas, Jack Waudby, Peter Boncz, Alastair Green
+
LDBC is hosting a two-day hybrid workshop, co-located with SIGMOD 2022 on June 17-18 (Friday-Saturday).
+
The program consists of 10-15 minute talks followed by a Q&A session. The talks will be recorded and made available online.
+The tentative program is the following. All times are in EDT.
+
We will have a social event on Friday at 17:30 at El Vez (Google Maps).
Athens, Greece’s capital city, is easily accessible by air. Travelers on flights to Athens will land at Athens Eleftherios Venizelos International Airport.
+
To get to the city center, you can take the metro from the airport (Line #3) and get off at either Evangelismos (ΕΥΑΓΓΕΛΙΣΜΟΣ) or Syntagma (ΣΥΝΤΑΓΜΑ) station. You can also take express bus X95 and again get off at either Evangelismos or Syntagma (the latter is the terminus for the bus).
+
You can also take a taxi from the airport, which charges a fixed price to the city center (45 euros). More information on how to get around Athens from the airport can be found here: http://www.aia.gr/traveler/
+
+ Event dates: 19 Nov 2012 09:00 -- 20 Nov 2012 17:00 (local timezone)
+
+
+
+
The LDBC consortium are pleased to announce the first Technical User Community (TUC) meeting. This will be a two-day event in Barcelona on 19–20 November 2012.
+
So far, more than six commercial consumers of graph/RDF database technology have expressed an interest in attending the event, and more are welcome. The proposed format of the event will include:
+
+
Introduction by the coordinator and technical director explaining the objectives of the LDBC project
+
Invitation to users to explain their use-cases and describe the limitations they have found in current technology
+
Brain-storming session for identifying trends and mapping out strategies to tackle existing choke-points
+
+
The exact agenda will be published here as things get finalised before the event.
+
All users of RDF and graph databases are welcome to attend. If you are interested, please contact: ldbc AT ac DOT upc DOT edu
We will start at 9:00 on Monday for a full day, followed by half a day on Tuesday to allow attendees to travel home on the evening of the 20th.
+
Day 1
+
09:00 Welcome (Location: Aula Master)
+09:30 Project overview (Emphasis on task forces?) + Questionnaire results?
+10:30 Coffee break
+11:00 User talks (To gather information for use cases?)
+
13:00 Lunch
+
14:00 User talks (cont.)
+15:00 Use case discussions (based on questionnaire results + consortium proposal + user talks).
+16:00 Task force proposals (consortium)
+17:00 Finish first day
+
20:00 Social dinner
+
Day 2
+
10:00 Task force discussion (consortium + TUC)
+11:00 Coffee break
+11:30 Task force discussion (consortium + TUC)
+12:30 Summaries (Task forces, use cases, …) and actions
To reach the campus, there are several options, including taxi, metro, and bus.
+
+
Finding UPC
+
+
Finding the meeting room
+
Getting there
+
Flying: Barcelona airport is situated 12 km from the city. There are several ways of getting from the airport to the centre of Barcelona, the cheapest of which is to take the train, located just a few minutes’ walk past the parking lots outside terminal 2 (there is a free bus between terminal 1 and terminal 2; see this map of the airport). It is possible to buy packs of 10 train tickets, which makes the journey cheaper. Taking the bus to the centre of town is more convenient, as buses leave directly from terminals 1 and 2; however, it is more expensive than the train.
+
Rail: The Renfe commuter train leaves the airport every 30 minutes from 6.13 a.m. to 11.40 p.m. Tickets cost around 3€ and the journey to the centre of Barcelona (Sants or Plaça Catalunya stations) takes 20 minutes.
+
Bus: The Aerobus leaves the airport every 12 minutes, from 6.00 a.m. to 24.00, Monday to Friday, and from 6.30 a.m. to 24.00 on Saturdays, Sundays and public holidays. Tickets cost 6€ and the journey ends in Plaça Catalunya in the centre of Barcelona.
+
Taxi: From the airport, you can take one of Barcelona’s typical black and yellow taxis. Taxis may not take more than four passengers. Unoccupied taxis display a green light and have a clearly visible sign showing LIBRE or LLIURE. The trip to Sants train station costs approximately €16 and trips to other destinations in the city cost approximately €18.
+
Train and bus: Barcelona has two international train stations: Sants and França. Bus companies have different points of arrival in different parts of the city. You can find detailed information in the following link: http://www.barcelona-airport.com/eng/transport_eng.htm
+
+ Event dates: 16 Aug 2021 16:00 -- 16 Aug 2021 20:00 (local timezone)
+
+
+
+
LDBC hosted a one-day hybrid workshop, co-located with VLDB 2021 on August 16 (Monday) between 16:00–20:00 CEST.
+
The physical part of the workshop was held in room Akvariet 2 of the Tivoli Hotel (Copenhagen), while the virtual part was hosted on Zoom. Our programme consisted of talks that provided an overview of LDBC’s recent efforts. Moreover, we invited industry practitioners and academic researchers to present their latest results.
+
Talks were scheduled to be 10 minutes with a short Q&A session. We had three sessions. Their schedules are shown below.
+
[16:00–17:25 CEST] LDBC updates, benchmarks, query languages
The LDBC consortium are pleased to announce the fourth Technical User Community (TUC) meeting.
+
This will be a one-day event at CWI in Amsterdam on Thursday April 3, 2014.
+
The event will include:
+
+
Introduction to the objectives and progress of the LDBC project.
+
Description of the progress of the benchmarks being evolved through Task Forces.
+
Users explaining their use-cases and describing the limitations they have found in current technology.
+
Industry discussions on the contents of the benchmarks.
+
+
All users of RDF and graph databases are welcome to attend. If you are interested, please contact: ldbc AT ac DOT upc DOT edu
+
Presenters: please limit your talks to 15 minutes.
+
Agenda
+
April 3rd
+
+
+
10:00 Peter Boncz (VUA) – pptx, video: LDBC project status update
+
+
+
10:20 Norbert Martinez (UPC) – pdf, video: Status update on the LDBC Social Network Benchmark (SNB) task force.
+
+
+
10:50 Alexandru Iosup (TU Delft) – ppt, video: Towards Benchmarking Graph-Processing Platforms
+
+
+
11:10 Mike Bryant (Kings College) – pptx, video: EHRI Project: Archival Integration with Neo4j
+
+
+
11:30 coffee
+
+
+
11:50 Thilo Muth (University of Magdeburg) – pptx, video: MetaProteomeAnalyzer: a graph database backed software for functional and taxonomic protein data analysis
+
+
+
12:10 Davy Suvee (Janssen Pharmaceutica / Johnson & Johnson) – video: Euretos Brain - Experiences on using a graph database to analyse data stored as a scientific knowledge graph
+
+
+
12:30 Yongming Luo (TU Eindhoven) – pdf, video: Regularities and dynamics in bisimulation reductions of big graphs
+
+
+
12:50 Christopher Davis (TU Delft) – pdf, video: Enipedia - Enipedia is an active exploration into the applications of wikis and the semantic web for energy and industry issues
15:00 Bastiaan Bijl (Sysunite) – pdf, video: Using a semantic approach for monitoring applications in large engineering projects
+
+
+
15:20 Frans Knibbe (Geodan) – pptx, video: Benchmarks for geographical data
+
+
+
15:40 Armando Stellato (University of Rome, Tor Vergata & UN Food and Agriculture Organization) – pptx, video: VocBench2.0, a Collaborative Environment for SKOS/SKOS-XL Management: scalability and (inter)operability challenges
+
+
+
16:00 coffee
+
+
+
16:20 Ralph Hodgson (TopQuadrant) – [pdf](https://pub-383410a98aef4cb686f0c7601eddd25f.r2.dev/event/fourth-tuc-meeting/attachments/5538064/5506367.pdf), video: Customer experiences in implementing SKOS-based vocabulary management systems
+
+
+
16:40 Simon Jupp (European Bioinformatics Institute) – pdf, video: Delivering RDF for the life science at the European Bioinformatics Institute: Six months in.
+
+
+
17:00 Jerven Bolleman (Swiss Institute of Bioinformatics) – pdf, video: Breakmarking UniProt RDF. SPARQL queries that make your database cry…
+
+
+
17:20 Rein van ’t Veer (Digital Heritage Netherlands) – pptx, video: Time and space for heritage
The meeting will be held at the Dutch national research institute for computer science and mathematics (CWI - Centrum voor Wiskunde en Informatica). It is located at Amsterdam Science Park:
Amsterdam has a well-functioning and nearby airport called Schiphol (AMS, www.schiphol.nl) that serves all main European carriers and also very many low-fare carriers.
Trains (~5 per hour) are the most convenient means of transport between Schiphol airport and the Amsterdam city center: the trip to Centraal Station takes 17 minutes, with a train every 15 minutes. Centraal Station is also where you are likely to arrive if you travel by international train.
+
From Centraal Station in Amsterdam, there is a direct train (every half hour, taking 11 minutes) to the Science Park station, which is within walking distance of CWI. If you go from Centraal Station to one of the hotels, take tram 9, which starts at Centraal Station (exception: for Hotel Casa 400, take the metro to Amstel station - any of the metros will do).
+
A taxi is an alternative, though expensive: the fare from Schiphol to CWI or another point in the city center is around 45 EUR (depending on traffic, the ride takes 20-30 minutes).
+
Public transportation (tram, bus, metro) tickets for a single ride and 1-day (24 hour) passes can be purchased from the driver/conductor on trams and buses (cash only) and from vending machines in the metro stations.
+
As a visitor, you will only need the “disposable” cards.
+
Multi-day (up to 7-days/168 hours) passes can only be purchased from the vending machines or from the ticket office opposite of Centraal Station.
+
Getting Around: the fastest way to get around the city of Amsterdam is generally by bicycle. Consider renting one at your hotel. For getting from your hotel to CWI, you can either take a taxi (expensive), have a long walk (35 min), use public transportation (for NH Tropen/The Manor take bus 40 from Muiderpoort Station; for Hotel Casa 400 the same bus 40 but from Amstel station; and for the Rembrandt Hotel take tram 9 until Middenweg/Kruislaan and then bus 40), or indeed bike for 12 minutes.
+
Cars
+
In case you plan to arrive by car, please be aware that parking space in Amsterdam is scarce and hence very expensive. However, you can park your car on the “WCW” premises where CWI is located. To enter the premises by car, take a ticket from the machine at the gate; to leave, get an exit ticket from the CWI reception.
+
Arriving at CWI: once you arrive at CWI, report to the reception desk and tell them that you are attending the LDBC TUC meeting. You will then receive a visitor’s pass that allows you to enter the building.
+
Social Dinner
+
The social dinner will take place at 7pm on April 3 in Restaurant Boom (boometenendrinken.nl), Linneausstraat 63, Amsterdam.
Organizers: Oskar van Rest, Alastair Green, Gábor Szárnyas
+
LDBC is hosting a two-day hybrid workshop, co-located with SIGMOD 2023 on June 23-24 (Friday-Saturday).
+
The program consists of 10- and 15-minute talks followed by a Q&A session. The talks will be recorded and made available online. If you would like to participate please register using our form.
Organizers: Gábor Szárnyas, Jack Waudby, Peter Boncz, Alastair Green
+
LDBC is hosting a two-day hybrid workshop, co-located with SIGMOD 2022 on June 17-18 (Friday-Saturday).
+
The program consists of 10-15 minute talks followed by a Q&A session. The talks will be recorded and made available online.
The tentative program is the following. All times are in EDT.
+
We will have a social event on Friday at 17:30 at El Vez (Google Maps).
LDBC hosted a one-day hybrid workshop, co-located with VLDB 2021, on August 16 (Monday) between 16:00 and 20:00 CEST.
+
The physical part of the workshop was held in room Akvariet 2 of the Tivoli Hotel (Copenhagen), while the virtual part was hosted on Zoom. Our programme consisted of talks that provide an overview of LDBC’s recent efforts. Moreover, we have invited industry practitioners and academic researchers to present their latest …
LDBC is pleased to announce its Thirteenth Technical User Community (TUC) meeting.
+
LDBC Technical User Community meetings serve to (1) learn about progress in the LDBC task forces on graph benchmarks and graph standards, (2) to give feedback on these, and (3) hear about user experiences with graph data management technologies or (4) learn about new graph technologies from researchers or industry – LDBC counts Oracle, IBM, Intel, Neo4j, TigerGraph …
LDBC is pleased to announce its Twelfth Technical User Community (TUC) meeting.
+
LDBC Technical User Community meetings serve to (1) learn about progress in the LDBC task forces on graph benchmarks and graph standards, (2) to give feedback on these, and (3) hear about user experiences with graph data management technologies or (4) learn about new graph technologies from researchers or industry – LDBC counts Oracle, IBM, Intel, Neo4j, …
LDBC Technical User Community meetings serve to (1) learn about progress in the LDBC task forces on graph benchmark development, (2) to give feedback on these, and (3) hear about user experiences with graph data management technologies or (4) learn about new graph technologies from researchers or industry – LDBC counts Oracle, IBM, Intel, Neo4j and Huawei among its members.
+
This TUC meeting will be a one-day event preceding the SIGMOD/PODS …
+ Location: SAP Headquarters in Walldorf Germany
+
+
+
+ Event dates: 09 Feb 2017 15:07 -- 10 Feb 2017 15:07 (local timezone)
+
+
+
+
LDBC is pleased to announce its Ninth Technical User Community (TUC) meeting.
+
This will be a two-day event at SAP Headquarters in Walldorf, Germany on February 9+10, 2017.
+
This will be the third TUC meeting after the finalisation of the LDBC FP7 EC-funded project. The event will cover the following aspects:
+
+
A two-day event with one day devoted to users’ experiences and one day devoted to benchmarking experiences.
+
Presentation of the benchmarking results for the different benchmarks.
+
Interaction with the new LDBC Board of Directors and the LDBC organisation officials.
+
+
We welcome all users of RDF and graph technologies to attend. If you are interested, please contact Damaris Coll (UPC) at damaris@ac.upc.edu.
+
In the agenda, there will be talks by LDBC members about LDBC activities, but there will also be room for a number of short 20-minute talks by other participants. We are specifically interested in learning about new challenges in graph data management (where benchmarking would become useful) and in hearing about actual user stories and scenarios that could inspire benchmarks. Further, talks that provide feedback on existing benchmarks (or proposals) are very relevant. But nothing is excluded a priori if it is related to graph data management. Talk proposals can be forwarded to Damaris as well and will be handled by Peter Boncz and Larri.
+
Further, if you or your colleagues have contacts at companies that deal with graph data management scenarios, we encourage you to invite them to attend and possibly present. LDBC is always looking to expand its circle of TUC participants, its contacts among graph technology users, and eventually its membership base.
+
Agenda
+
In the TUC meeting there will be
+
+
updates on progress with LDBC benchmarks, specifically the Social Network Benchmark (SNB) and its Interactive, Business Intelligence and Graphalytics workloads.
+
talks by data management practitioners highlighting graph data management challenges
+
selected scientific talks on graph data management technology
+
+
The meeting will start on Thursday morning, with a program from 09:00-18:00, interrupted by a lunch break.
+
Thursday evening (19:00-21:00) there will be a social dinner in Heidelberg.
+
Friday morning the event resumes from 9:00-12:00. In the afternoon, there is a (closed) LDBC Board of Directors meeting (13:00-16:30) at the same venue.
There are two airports close to SAP’s headquarters: Frankfurt Airport (FRA) and Stuttgart-Echterdingen Airport (STR). The journey from Frankfurt Airport to SAP headquarters takes about one hour by car, while it takes slightly longer from Stuttgart-Echterdingen Airport. Concerning airfare, flights to Frankfurt are usually somewhat more expensive than to Stuttgart.
+
When booking flights to Frankfurt, you should be aware of Frankfurt-Hahn Airport (HHN), which serves low-cost carriers but is not connected to Frankfurt Airport. Frankfurt Hahn is approximately one hour from the Frankfurt main airport by car.
+
The journey from Frankfurt Airport to SAP headquarters takes about one hour by car (95 kilometers, or 59 miles).
+
Journey time from Stuttgart-Echterdingen Airport to SAP headquarters takes about 1 hour and 15 minutes by car (115 kilometers, or 71 miles).
+
Driving directions
+
Traveling from Frankfurt Airport (FRA) to SAP Headquarters:
+
Directions to SAP headquarters:
+
+
When leaving the airport, follow the highway symbol onto “A3/Würzburg/A5/Kassel/Basel/Frankfurt.”
+
Follow the A5 to “Basel/Karlsruhe/Heidelberg.”
+
Take exit 39 – “Walldorf/Wiesloch.”
+
Turn left onto B291.
+
Turn right onto Dietmar-Hopp-Allee.
+
+
(Should you use a navigational system which does not recognize the street name ‘Dietmar-Hopp-Allee’ please use ‘Neurottstrasse’ instead.)
+
Traveling from Stuttgart-Echterdingen Airport (STR) to SAP Headquarters:
+
To get to SAP headquarters by car, there are two possible routes to take. The first leads you via Heilbronn and the second via Karlsruhe. The route via Karlsruhe is a bit shorter yet may be more congested.
+
Directions to SAP headquarters:
+
+
When leaving the airport, follow the highway symbol onto “A8/Stuttgart/B27.”
+
Stay on A8 and follow the sign for “Karlsruhe/Heilbronn/Singen/A8.”
+
Follow A8 to Karlsruhe.
+
Take exit 41 – “Dreieck Karlsruhe” to merge onto A5 toward “Frankfurt/Mannheim/Karlsruhe/Landau (Pfalz).”
+
Take exit 39 – “Walldorf/Wiesloch.”
+
Turn left onto B291.
+
Turn right onto Dietmar-Hopp-Allee.
+
+
Parking
+
The closest parking lot to the event location is P7 (see figure above).
+
By Train
+
As the infrastructure is very well developed in Europe, and in Germany in particular, taking the train is a great and easy way of traveling. Furthermore, the trains usually run on time, so this mode of travel is very convenient, especially for a group of people on longer journeys to major cities.
+
From Frankfurt Airport (FRA) to SAP Headquarters
+
Directions to SAP headquarters:
+
+
Go to Terminal 1, level T (see overview in Appendix).
+
Go to the AIRail Terminal – “Fernbahnhof” (long-distance trains).
+
Choose a connection with the destination train station “Wiesloch–Walldorf”.
+
From station “Wiesloch–Walldorf,” take bus number 707 or 721 toward “Industriegebiet Walldorf, SAP.” It is a 10-minute ride to reach bus stop ‘SAP headquarters’.
+
+
From Stuttgart-Echterdingen Airport (STR) to SAP Headquarters
+
Directions to SAP headquarters:
+
+
Go to the S-Bahn station in the airport, following the sign (station is called “Stuttgart Flughafen/Messe”).
+
Take train number S2 or S3 to “Stuttgart Hauptbahnhof” (main station).
+
From Stuttgart Hauptbahnhof choose a connection with the destination train station “Wiesloch–Walldorf”.
+
From station “Wiesloch–Walldorf,” take bus number 707 or 721 toward “Industriegebiet Walldorf, SAP”. It is a 10-minute ride to reach bus stop ‘SAP headquarters’.
The LDBC consortium is pleased to announce its Eighth Technical User Community (TUC) meeting.
+
This will be a two-day event at the Oracle Conference Center in the Redwood Shores facility on Wednesday and Thursday, June 22-23, 2016.
+
This will be the second TUC meeting after the finalisation of the LDBC FP7 EC-funded project. The event will cover the following aspects:
The LDBC consortium is pleased to announce its Sixth Technical User Community (TUC) meeting.
+
This will be a two-day event at Universitat Politècnica de Catalunya, Barcelona on Thursday and Friday March 19/20, 2015.
+
The LDBC FP7 EC-funded project is reaching its finalisation, and this will be the last event sponsored directly by the project. However, tasks within LDBC will continue under the independent LDBC organisation. The event will …
The LDBC consortium is pleased to announce the third Technical User Community (TUC) meeting!
+
This will be a one-day event in London on 19 November 2013, running in collaboration with the GraphConnect event (18/19 November). Registered TUC participants who would like a free pass to all of GraphConnect should register for GraphConnect using the following coupon code: LDBCTUC.
The LDBC consortium is pleased to announce the first Technical User Community (TUC) meeting. This will be a two-day event in Barcelona on the 19/20th November 2012.
+
So far, more than six commercial consumers of graph/RDF database technology have expressed an interest in attending the event, and more are welcome. The proposed format of the event will include:
+
+
Introduction by the coordinator and technical director explaining the objectives of the …
15:30 Fabian Suchanek (MPI): “YAGO: A large knowledge base from Wikipedia and WordNet”
16:00 Antonis Loziou (VUA): The OpenPHACTS approach to data integration
16:30 Mirko Kämpf (Brox): “GeoKnow - Spatial Data Web project and Supply Chain Use Case”
11:30 Peter Boncz (VUA) moderates: next steps in the Social Networking Task Force
12:00 Barry Bishop (Ontotext) moderates: next steps in the Semantic Publishing Task Force
+
12:30 End of meeting
+
Logistics
+
Date
+
22nd and 23rd April 2013
+
Location
+
The TUC meeting will be held at LE009 room at LRZ (Leibniz-Rechenzentrum) located inside the TU Munich campus in Garching, Germany. The address is:
To reach the campus, there are several options, including taxi and subway (U-Bahn).
+
Getting to the TUM Campus from the Munich city center: Subway (U-Bahn)
+
Take the U-Bahn line U6 in the direction of Garching-Forschungszentrum and exit at the end station. Take the south exit to the MI-Building and LRZ on the Garching Campus. The journey from the city center takes approx. 25-30 minutes. To get here from the city center, you need the Munich XXL ticket, which costs around 7.50 euros and covers all types of transportation for one day. The ticket has to be validated before the ride.
+
Getting to the TUM Campus from the Munich Airport
+
+
+
(except weekends) Take the S-Bahn S8 line in the direction of Munich Central Station (Hauptbahnhof) until the third stop, Ismaning (approx. 13 minutes). From there, take bus no. 230 to the MI-Building stop on the Garching Campus. Alternatively: take the S1 line until Neufahrn, then bus 690, which stops at Boltzmannstraße.
+
+
+
S-Bahn lines S8 or S1 towards City Center until Marienplatz stop. Then change to U-bahn U6 line towards Garching-Forschungszentrum, exit at the last station. Take the south exit to MI-Building and LRZ.
+
+
+
Taxi: fare is ca. 30-40 euros.
+
+
+
For cases 1 and 2, buy a one-day Munich Airport ticket before the trip and validate it. It covers all public transportation for that day.
+
Getting to the TUM Campus from Garching: U-Bahn
+
The city of Garching is located on the U6 line, one stop before the Garching-Forschungszentrum. In order to get from Garching to Garching-Forschungszentrum with the U-bahn, a special one-way ticket called Kurzstrecke (1.30 euros) can be purchased.
Flying: Munich airport is located 28.5 km northeast of Munich. There are two ways to get from the airport to the city center: suburban train (S-bahn) and Taxi.
+
S-Bahn: S-Bahn lines S1 and S8 will get you from Munich airport to the city center, stopping at both Munich Central Station (Hauptbahnhof) and Marienplatz. A one-day Airport-City ticket costs 11.20 euros and is valid for all public transportation in the Munich area on the day of purchase (the ticket needs to be validated before the journey). The S-Bahn leaves every 5-20 minutes and reaches the city center in approx. 40 minutes.
+
Taxi: a taxi from the airport to the city center costs approximately 50 euros.
+
Social Dinner
+
The social dinner will take place at 7 pm on April 22 in Hofbräuhaus (second floor)
+
+ Event dates: 09 Nov 2015 14:17 -- 10 Nov 2015 14:17 (local timezone)
+
+
+
+
The LDBC consortium is pleased to announce its Seventh Technical User Community (TUC) meeting.
+
This will be a two-day event at IBM’s TJ Watson facility on Monday and Tuesday November 9/10, 2015.
+
This will be the first TUC meeting after the finalisation of the LDBC FP7 EC-funded project. The event will cover the following aspects:
+
+
A two-day event with one day devoted to users’ experiences and one day devoted to benchmarking experiences.
+
Presentation of the benchmarking results for the different benchmarks.
+
Interaction with the new LDBC Board of Directors and the LDBC organisation officials.
+
+
We welcome all users of RDF and graph technologies to attend. If you are interested, please contact Damaris Coll (UPC) at damaris@ac.upc.edu; in order to notify IBM security in advance, registration requests need to be in by Nov 1.
+
In the agenda, there will be talks by LDBC members about LDBC activities, but there will also be room for a number of short 20-minute talks by other participants. We are specifically interested in learning about new challenges in graph data management (where benchmarking would become useful) and in hearing about actual user stories and scenarios that could inspire benchmarks. Further, talks that provide feedback on existing benchmarks (or proposals) are very relevant. But nothing is excluded a priori if it is related to graph data management. Talk proposals can be forwarded to Damaris as well and will be handled by Peter Boncz and Larri.
+
Further, if you or your colleagues have contacts at companies that deal with graph data management scenarios, we encourage you to invite them to attend and possibly present. LDBC is always looking to expand its circle of TUC participants, its contacts among graph technology users, and eventually its membership base.
+
On this page, you’ll find information about the following items:
16:00 Kavitha Srinivas (IBM). SQLGraph: An efficient relational based property graph store
+
16:30 David Ediger (GeorgiaTech). STINGER
+
17:00 Gary King (Franz Inc.). AllegroGraph’s SPARQL implementation with Social Network Analytics abilities using Magic Properties
+
17:30 Manoj Kumar (IBM). Linear Algebra Formulation for Large Graph Analytics
+
18:00 Reihaneh Amini (Wright State University) Linked Data in the GeoLink Usecase
+
19:00 Social dinner
+
Tuesday 10th November 2015
+
9:00 - 10:30 Technology, Applications and Benchmarking (chair Alexandru Iosup)
+
9:00 Philip Rathle (Neo). On openCypher
+
9:20 Morteza Shahriari (University of Florida). Multi-modal Probabilistic Knowledge Base for Remote Sensing Species Identification
+
9:50 Peter Kogge (Notre Dame). Challenging problems with Lexis Nexis Risk Solutions
+
10:10 Arnau Prat (DAMA-UPC). DATAGEN, status and perspectives for synthetic data generation
+
10:30 - 11:00 Coffee break
+
11:00 - 12:45 Applications and use of Graph Technologies (chair Atanas Kiryakov)
+
11:00 Hassan Chafi (Oracle). Status and characteristics of PGQL
+
11:20 David Guedalia (TAGIIO). Multi-tier distributed mobile applications and how they split their workload,
+
11:40 Guojing Cong (IBM). Algorithmic technique and architectural support for fast graph analysis
+
12:00 Josep Lluis Larriba-Pey. Conclusions for the TUC meeting and future perspectives
+
12:30 - 14:00 Lunch break
+
14:00 LDBC Board of Directors
+
Logistics
+
Date
+
9th and 10th November 2015
+
Venue
+
The TUC meeting will be held in the IBM Thomas J Watson Research Center.
+The address is:
+
IBM Thomas J Watson Research Center
+1101 Kitchawan Rd,
+Yorktown Heights, NY 10598, USA
+
If you are using a GPS system, please enter “200 Aqueduct Road, Ossining NY, 10562” for accurate directions to the lab entrance. You may also want to check the routing online.
+
The meeting will take place in the Auditorium on November 9th, and in Meeting Room 20-043 on November 10th.
+
Maps and situation
+
We strongly suggest renting a car for your convenience, since the public transportation system does not cover this area very well. Besides, there is no hotel within walking distance of the IBM T.J. Watson Research Center. Feel free to arrange a carpool with other attendees. You may find car rentals and hotels through www.orbitz.com or www.expedia.com. Feel free to email yxia@us.ibm.com with any questions.
+
+
Getting there
+
Upper and Eastern New England
+
Route I-84 west to Route I-684, south to Exit 6, west on Route 35 to Route 100, south to Route 134, west 2.5 miles. IBM is on the left.
+
New Haven and Connecticut Shores
+
Merritt Parkway or New England Thruway (Route I-95) west to Route I-287, west to Exit 3, north on Sprain Brook Parkway, which merges into Taconic State Parkway, north to Ossining/Route 134 exit. Turn right and proceed east on Route 134 several hundred yards. IBM is on the right.
+
New Jersey
+
Take New York State Thruway (Route I-87) east across the Tappan Zee Bridge and follow signs to the Saw Mill Parkway north. Proceed north on Saw Mill River Parkway to Taconic State Parkway exit, north to Ossining/Route 134 exit. Turn right and proceed east on Route 134 several hundred yards. IBM is on the right.
+
Upstate New York
+
Route I-84 east across Newburgh-Beacon Bridge to Exit 16-S. Taconic State Parkway south to Route 134 East exit. Turn right and proceed east on Route 134 several hundred yards. IBM is on the right.
+
New York City (Manhattan)
+
Henry Hudson Parkway north, which becomes Saw Mill River Parkway, north to Taconic State Parkway exit. North on Taconic State Parkway to Ossining/Route 134 exit. Turn right and proceed east on Route 134 several hundred yards. IBM is on the right.
+
John F. Kennedy International Airport
+
North on Van Wyck Expressway to the Whitestone Expressway and continue north across the Bronx-Whitestone Bridge to the Hutchinson River Parkway north to the Cross County Parkway exit and proceed west to the Bronx River Parkway. North on the Bronx River Parkway to the Sprain Brook Parkway, which merges into the Taconic State Parkway. Continue north to Ossining/Route 134 exit. Turn right and proceed east on Route 134 several hundred yards. IBM is on the right.
+
LaGuardia Airport
+
East on the Grand Central Parkway, north on the Whitestone Expressway, and continue north across the Bronx-Whitestone Bridge. Continue with instructions from John F. Kennedy International Airport, above.
+
Newark International Airport
+
North on the New Jersey Turnpike (Route I-95). Stay in local lanes and take Exit 72 for Palisades Interstate Parkway. North on the Palisades Interstate Parkway to the New York State Thruway, Route I-87, and east across the Tappan Zee Bridge. Continue with instructions from New Jersey, above.
+
Stewart International Airport
+
Route 207 east to Route I-84, east across Newburgh-Beacon Bridge to Taconic State Parkway, south. Continue with instructions from Upstate New York, above.
+
Westchester County Airport
+
Right on Route 120, north. Turn left where Route 120 merges with Route 133. Continue on Route 120. Cross Route 100 and continue straight on Shingle House Road to Pines Bridge Road. Turn right and proceed several hundred yards. IBM is on the left.
+
Public Transportation
+
Metropolitan Transportation Authority (MTA) train stations nearest to the Yorktown Heights location are the Croton-Harmon and White Plains stations. Taxi service is available at both locations.
by Oskar van Rest, Gábor Szárnyas / on 14 Feb 2023
+
+
+
+
+ Location: Seattle
+
+
+
+ Event dates: 23 Jun 2023 09:00 -- 24 Jun 2023 18:00 (local timezone)
+
+
+
+
Organizers: Oskar van Rest, Alastair Green, Gábor Szárnyas
+
LDBC is hosting a two-day hybrid workshop, co-located with SIGMOD 2023 on June 23-24 (Friday-Saturday).
+
The program consists of 10- and 15-minute talks followed by a Q&A session. The talks will be recorded and made available online. If you would like to participate please register using our form.
In addition, AWS will host a Happy Hour (rooftop grill with beverages) on Saturday on the Amazon Nitro South building’s 8th floor deck: 2205 8th Ave, Seattle, WA 98121.
+
+ Event dates: 19 Mar 2015 13:53 -- 20 Mar 2015 13:53 (local timezone)
+
+
+
+
The LDBC consortium is pleased to announce its Sixth Technical User Community (TUC) meeting.
+
This will be a two-day event at Universitat Politècnica de Catalunya, Barcelona on Thursday and Friday March 19/20, 2015.
+
The LDBC FP7 EC-funded project is reaching its finalisation, and this will be the last event sponsored directly by the project. However, tasks within LDBC will continue under the independent LDBC organisation. The event will cover the following aspects:
+
+
A two-day event with one day devoted to users’ experiences and one day devoted to benchmarking experiences.
+
Presentation of the first benchmarking results for the different benchmarks.
+
Interaction with the new LDBC Board of Directors and the new LDBC organisation officials.
+
A pre-event, the 3rd Graph-TA workshop, organised on March 18th at the same premises, with plenty of interaction and interesting research presentations.
+
+
We welcome all users of RDF and graph technologies to attend. If you are interested, please contact damaris@ac.upc.edu.
To reach the campus, there are several options, including Taxi, Metro and Bus.
+
+
Finding UPC
+
+
Finding the meeting room
+
Getting there
+
Flying: Barcelona airport is situated 12 km from the city. There are several ways of getting from the airport to the centre of Barcelona, the cheapest of which is the train, located just a few minutes' walk past the parking lots outside Terminal 2 (there is a free bus between Terminal 1 and Terminal 2; see this map of the airport). It is possible to buy 10-packs of train tickets, which makes the trip cheaper. Taking the bus to the centre of town is more convenient, as buses leave directly from Terminals 1 and 2; however, it is more expensive than the train.
+
Rail: The Renfe commuter train leaves the airport every 30 minutes from 6.13 a.m. to 11.40 p.m. Tickets cost around 3€ and the journey to the centre of Barcelona (Sants or Plaça Catalunya stations) takes 20 minutes.
+
Bus: The Aerobus leaves the airport every 12 minutes, from 6.00 a.m. to 24.00, Monday to Friday, and from 6.30 a.m. to 24.00 on Saturdays, Sundays and public holidays. Tickets cost 6€ and the journey ends in Plaça Catalunya in the centre of Barcelona.
+
Taxi: From the airport, you can take one of Barcelona’s typical black and yellow taxis. Taxis may not take more than four passengers. Unoccupied taxis display a green light and have a clearly visible sign showing LIBRE or LLIURE. The trip to Sants train station costs approximately €20 and trips to other destinations in the city cost approximately €25-30.
+
Train and bus: Barcelona has two international train stations: Sants and França. Bus companies have different points of arrival in different parts of the city. You can find detailed information in the following link: http://www.barcelona-airport.com/eng/transport_eng.htm
This will be a one-day event at the VLDB 2017 conference in Munich, Germany on September 1, 2017.
+
Topics and activities of interest in these TUC meetings are:
+
+
Presentation on graph data management usage scenarios.
+
Presentation of the benchmarking results for the different benchmarks, as well as the graph query language task force.
+
Interaction with the new LDBC Board of Directors and the LDBC organisation officials.
+
+
We welcome all users of RDF and graph technologies to attend. If you are interested in attending the event, please contact Adrian Diaz (UPC) at adiaz@ac.upc.edu to register; registration is free, but required.
+
In the agenda, there will be talks by LDBC members about LDBC activities, but there will also be room for a number of short 20-minute talks by other participants. We are specifically interested in learning about new challenges in graph data management (where benchmarking would become useful) and in hearing about actual user stories and scenarios that could inspire benchmarks. Further, talks that provide feedback on existing benchmarks (or proposals) are very relevant. But nothing is excluded a priori if it is related to graph data management. Talk proposals are handled by Peter Boncz and Larri.
+
Further, if you or your colleagues have contacts at companies that deal with graph data management scenarios, we encourage you to invite them to attend and possibly present. LDBC is always looking to expand its circle of TUC participants, its contacts among graph technology users, and eventually its membership base.
+
Agenda
+
In the TUC meeting there will be:
+
+
updates on progress with LDBC benchmarks, specifically the Social Network Benchmark (SNB) and its Interactive, Business Intelligence and Graphalytics workloads.
+
talks by data management practitioners highlighting graph data management challenges
+
selected scientific talks on graph data management technology
+
+
The meeting will start on Friday morning, with a program from 10:30-17:00.
On Friday evening (19:00-21:00) there will be a social dinner at Löwenbräukeller, sponsored and arranged by LDBC member Huawei (who have their European Research Center in Munich).
+
On Friday morning (8:30-10:30) there will be a meeting of the LDBC Board of Directors; this meeting is not public.
+
+
Venue
+
The Technical University of Munich (TUM) is hosting the VLDB conference that week; on the day of the TUC meeting the main conference will have finished, but a number of co-located workshops will be ongoing, and the TUC participants will blend in with that crowd for the breaks and lunch.
+
The TUC meeting will be held in Room 2607, alongside the VLDB workshops taking place that day (MATES, ADMS, DMAH, DBPL and BOSS).
+
address: Technische Universität München (TUM), Arcisstraße 21, 80333 München
The LDBC consortium is pleased to announce the third Technical User Community (TUC) meeting!
+
This will be a one-day event in London on 19 November 2013, running in collaboration with the GraphConnect event (18/19 November). Registered TUC participants who would like a free pass to all of GraphConnect should register for GraphConnect using the following coupon code: LDBCTUC.
+
The TUC event will include:
+
+
Introduction to the objectives and progress of the LDBC project
+
Description of the progress of the benchmarks being evolved through Task Forces
+
Users explaining their use-cases and describing the limitations they have found in current technology
+
Industry discussions on the contents of the benchmarks
+
+
We will also be launching the LDBC non-profit organization, so anyone outside the EU project will be able to join as a member.
+
We will kick off new benchmark development task forces in the coming year, and talks at this coming TUC will play an important role in deciding the use case scenarios that will drive those benchmarks.
+
All users of RDF and graph databases are welcome to attend. If you are interested, please contact: ldbc AT ac DOT upc DOT edu
Thomas Scharrenbach (University of Zurich) PCKS: Benchmarking Semantic Flow Processing Systems
+
+
17:20 Meeting Conclusion (Josep Larriba Pey)
+
17:30 End of TUC meeting
+
19:00 Social dinner
+
November 20th - Internal LDBC Meeting
+
10:00 Start
+
12:30 End of meeting
+
+
coffee and lunch provided
+
+
Logistics
+
Date
+
19th November 2013
+
Location
+
The TUC meeting will be held in The Tower hotel, approximately a 4-minute walk from the GraphConnect conference in London.
+
Getting there
+
+
From London City Airport is the easiest: a short ride on the DLR to Tower Gateway.
+
From London Heathrow: take the Heathrow Express to Paddington, then the Circle line to Tower Hill.
+
+
LDBC/TUC Background
+
Looking back, we have been working on two benchmarks for the past year: a Social Network Benchmark (SNB) and a Semantic Publishing Benchmark (SPB). A summary of these efforts can be read below; for a more detailed account, please refer to The Linked Data Benchmark Council: a Graph and RDF industry benchmarking effort. Annual reports about the progress, results, and future work of these two efforts will soon be available for download here, and will be discussed in depth at the TUC.
+
Social Network Benchmark
+
The Social Network Benchmark (SNB) is designed for evaluating a broad range of technologies for tackling graph data management workloads: the targeted systems range from graph, RDF, and relational database systems to Pregel-like graph compute frameworks. The social network scenario was chosen with the following goals in mind:
+
+
it should be understandable, and the relevance of managing such data should be clear
+
it should cover the complete range of interesting challenges, according to the benchmark scope
+
the queries should be realistic, i.e., similar data and workloads are encountered in practice
+
+
SNB includes a data generator for creating synthetic social network data with the following characteristics:
+
+
the data schema is representative of real social networks
+
the generated data includes properties occurring in real data, e.g. irregular structure, structure/value correlations and power-law distributions
+
the generator software is easy to use, configurable and scalable
+
+
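The power-law property listed above can be illustrated with a tiny sampler. The following is only an illustrative sketch, not LDBC's actual Datagen: it draws node degrees from a truncated power law, and the exponent and cut-off values are made-up parameters.
+
```python
import random

def power_law_degree(alpha=2.5, k_min=1, k_max=1000):
    """Sample a node degree from a truncated power law P(k) ~ k^-alpha
    via inverse-transform sampling on the continuous approximation."""
    u = random.random()
    a = k_min ** (1 - alpha)
    b = k_max ** (1 - alpha)
    return int((a + u * (b - a)) ** (1 / (1 - alpha)))

random.seed(42)  # a fixed seed gives determinism, another property listed above
degrees = [power_law_degree() for _ in range(10_000)]
# most simulated people have very few friends; a handful have very many
print("min:", min(degrees), "median:", sorted(degrees)[len(degrees) // 2],
      "max:", max(degrees))
```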
SNB is intended to cover a broad range of aspects of social network data management, and therefore includes three distinct workloads:
+
+
Interactive
+
+
Tests system throughput with relatively simple queries and concurrent updates; it is designed to test ACID features and scalability in an online operational setting.
+
The targeted systems are expected to be those that offer transactional functionality.
+
+
+
Business Intelligence
+
+
Consists of complex structured queries for analyzing the online behavior of users for marketing purposes; it is designed to stress query execution and optimization.
+
The targeted systems are expected to be those that offer an abstract query language.
+
+
+
Graph Analytics
+
+
Tests the functionality and scalability of systems for graph analytics, which typically cannot be expressed in a query language.
+
Analytics is performed on most or all of the data in the graph as a single operation and produces large intermediate results; it is not expected to be transactional or to need isolation.
+
The targeted systems are graph compute frameworks, though database systems may compete, for example by using iterative implementations that repeatedly execute queries and keep intermediate results in temporary data structures.
+
+
+
+
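The iterative style mentioned for the Graph Analytics workload, where a system repeatedly executes queries and keeps intermediate results in temporary data structures, can be sketched with a toy breadth-first search; this illustrates the approach only and is not LDBC code (the graph is made up).
+
```python
# Adjacency list standing in for an edge table; the graph is invented.
edges = {1: [2, 3], 2: [4], 3: [4, 5], 4: [6], 5: [6], 6: []}

def bfs_levels(source):
    """Compute hop distances by repeated set-based 'queries': each loop
    iteration expands the frontier, which is kept in a temporary structure."""
    visited = {source: 0}
    frontier = {source}
    level = 0
    while frontier:  # one "query" per iteration, until no new nodes appear
        level += 1
        nxt = {m for n in frontier for m in edges[n]} - visited.keys()
        visited.update((m, level) for m in nxt)
        frontier = nxt
    return visited

print(bfs_levels(1))  # maps each reachable node to its hop distance
```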
Semantic Publishing Benchmark
+
The Semantic Publishing Benchmark (SPB) simulates the management and consumption of RDF metadata that describes media assets, or creative works.
+
The scenario is a media organization that maintains RDF descriptions of its catalogue of creative works; input was provided by actual media organizations that make heavy use of RDF, including the BBC. The benchmark is designed to reflect a scenario where a large number of aggregation agents generate a heavy query workload while, at the same time, a steady stream of creative work description management operations is in progress. This benchmark only targets RDF databases that support at least basic forms of semantic inference. A tagging ontology is used to connect individual creative work descriptions to instances from reference datasets, e.g. sports, geographical, or political information. The data used falls under the following categories:
+
+
reference data: a combination of several Linked Open Data datasets, e.g. GeoNames and DBpedia
+
domain ontologies: specialist ontologies used to describe certain areas of publishing expertise, e.g. sport and education
+
publication asset ontologies: descriptions of the structure and form of the published assets, e.g. news stories, photos, video, audio
+
tagging ontologies and metadata: the links between assets and the reference/domain ontologies
+
The data generator is initialized by using several ontologies and datasets. The instance data collected from these datasets are then used at several points during the execution of the benchmark. Data generation is performed by generating SPARQL fragments for create operations on creative works and executing them against the RDF database system.
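To make the generation of SPARQL fragments for create operations concrete, here is a hedged sketch of how such a fragment might be assembled; the cwork: prefix and property names are invented for illustration and are not the actual SPB ontology terms.
+
```python
def creative_work_insert(work_id: int, title: str, about_uri: str) -> str:
    # Build one SPARQL INSERT DATA fragment for a creative work description.
    # The prefix and predicates below are hypothetical, not the real SPB schema.
    return f"""PREFIX cwork: <http://example.org/cwork/>
INSERT DATA {{
  cwork:{work_id} a cwork:CreativeWork ;
    cwork:title "{title}" ;
    cwork:about <{about_uri}> .
}}"""

fragment = creative_work_insert(42, "Match report",
                                "http://dbpedia.org/resource/Football")
# the generator would then execute this fragment against the RDF store under test
print(fragment)
```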
+
Two separate workloads are modeled in SPB:
+
+
Editorial: Simulates creating, updating and deleting creative work metadata descriptions. Media companies use both manual and semi-automated processes for efficiently and correctly managing asset descriptions, as well as annotating them with relevant instances from reference ontologies.
+
Aggregation: Simulates the dynamic aggregation of content for consumption by the distribution pipelines (e.g. a web-site). The publishing activity is described as “dynamic”, because the content is not manually selected and arranged on, say, a web page. Instead, templates for pages are defined and the content is selected when a consumer accesses the page.
LDBC is pleased to announce its Thirteenth Technical User Community (TUC) meeting.
+
LDBC Technical User Community meetings serve to (1) learn about progress in the LDBC task forces on graph benchmarks and graph standards, (2) give feedback on these, (3) hear about user experiences with graph data management technologies, and (4) learn about new graph technologies from researchers or industry – LDBC counts Oracle, IBM, Intel, Neo4j, TigerGraph and Huawei among its members.
+
This TUC meeting will be a two-day event hosted online. We welcome all users of RDF and Graph technologies to attend. If you are interested in attending the event, please contact Gabor Szarnyas (BME) to register.
LDBC is pleased to announce its Twelfth Technical User Community (TUC) meeting.
+
LDBC Technical User Community meetings serve to (1) learn about progress in the LDBC task forces on graph benchmarks and graph standards, (2) give feedback on these, (3) hear about user experiences with graph data management technologies, and (4) learn about new graph technologies from researchers or industry – LDBC counts Oracle, IBM, Intel, Neo4j, TigerGraph and Huawei among its members.
+
This TUC meeting will be a one-day event on the last Friday of SIGMOD/PODS 2019 in Amsterdam, The Netherlands, in the conference venue of Beurs van Berlage. The room is the Mendes da Silva kamer. Please check the conference website for tips on accommodation in Amsterdam.
+
Note also that at SIGMOD/PODS in Amsterdam on Sunday, June 30, there is a research workshop on graph data management technology called GRADES-NDA 2019, which may be of interest to our audience (this generally holds for the whole SIGMOD/PODS program, of course).
+
We welcome all users of RDF and Graph technologies to attend. If you are interested in attending the event, please contact Damaris Coll (UPC) at damaris@ac.upc.edu to register.
+
=> registration is free, but required <=
+
You need to be registered in order to get into the SIGMOD/PODS venue. Friday, July 5, is the final (workshop) day of SIGMOD/PODS, and the LDBC TUC meeting joins the other workshops for coffee and lunch.
+
In the agenda there will be talks by LDBC members about LDBC activities, but there will also be room for a number of short 20-minute talks by other participants. We are specifically interested in learning about new challenges in graph data management (where benchmarking would become useful) and in hearing about actual user stories and scenarios that could inspire benchmarks. Further, talks that provide feedback on existing benchmarks (or benchmark proposals) are very relevant. But nothing is excluded a priori if it is related to graph data management.
+
Talk proposals can be sent to Peter Boncz, who is also the local organizer. Please also send your slides to this email for archiving on this site.
+
Further, if you or your colleagues have contacts with companies that deal with graph data management scenarios, we encourage you to invite them to attend and possibly present. LDBC is always looking to expand its circle of TUC meeting participants, its contacts among graph technology users, and eventually its membership base.
+
Agenda
+
In the TUC meeting, there will be:
+
+
updates on progress with LDBC benchmarks, specifically the Social Network Benchmark (SNB) and its Interactive, Business Intelligence and Graphalytics workloads.
+
talks by data management practitioners highlighting graph data management challenges and products
+
+
The morning slot (08:30-10:30) is reserved for an LDBC Board Meeting, to which in principle only LDBC directors are invited (that meeting will be held in the same room).
+
The TUC meeting will start on Friday morning after the morning coffee break of SIGMOD/PODS 2019 (room: Mendes da Silva kamer):
The Financial Benchmark (FinBench) project defines a graph database benchmark targeting financial scenarios such as anti-fraud and risk control. It is maintained by the LDBC FinBench Task Force. The benchmark currently has one workload, the Transaction Workload, capturing an OLTP scenario with complex read queries that access the …
David Püroja just completed his MSc thesis on creating a design towards SNB Interactive v2 at CWI’s Database Architectures group. David and I gave a deep-dive talk at the FOSDEM conference’s graph developer room titled The LDBC Social Network …
LDBC SNB provides a data generator, which produces synthetic datasets mimicking a social network's activity during a period of time. Datagen is characterized by realism, scalability, determinism and usability. More than two years have elapsed since my last technical update on LDBC SNB Datagen, in which I discussed the reasons for moving the code to Apache Spark from the MapReduce-based Apache Hadoop implementation and the …
The Financial Benchmark (FinBench) project aims to define a graph database evaluation benchmark and to develop a data generation process and a query driver that make the evaluation of graph databases representative, reliable and comparable, especially in financial scenarios such as anti-fraud and risk control. FinBench is scheduled to be released in the …
LDBC’s Social Network Benchmark [4] (LDBC SNB) is an industrial and academic initiative, formed by principal actors in the field of graph-like data management. Its goal is to define a framework where different graph-based technologies can be fairly tested and compared, that can drive the identification of systems’ bottlenecks and required functionalities, and can help researchers open new frontiers in high-performance graph data …
LDBC’s working groups investigate research questions on graph query languages and language extensions for graphs including ISO/IEC SQL/PGQ, released in June 2023, and GQL, scheduled to be released in March 2024.
Apache Flink [1] is an open source platform for distributed stream and batch data processing. Flink’s core is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations over data streams. Flink also builds batch processing on top of the streaming engine, overlaying native iteration support, managed memory, and program optimization.
The number of datasets published in the Web of Data as part of the Linked Data Cloud is constantly increasing. The Linked Data paradigm is based on the unconstrained publication of information by different publishers, and the interlinking of web resources through “same-as” links which specify that two URIs correspond to the same real world object. In the vast number of data sources participating in the Linked Data Cloud, this information is not …
fairly frequent short updates, with no update contention worth mentioning
+
+
+
short random lookups
+
+
+
medium complex queries centered around a person’s social environment
+
+
+
The updates exist so as to invalidate strategies that rely too heavily on precomputation. The short lookups exist for the sake of realism; after all, an …
On the 31st of May the GRADES workshop will take place in Melbourne, co-located with ACM SIGMOD. GRADES started as an initiative of the Linked Data Benchmark Council at SIGMOD/PODS 2013, held in New York.
+
Among the papers published in this edition we have “Graphalytics: A Big Data Benchmark for Graph-Processing Platforms”, which presents a new benchmark that uses the Social Network Benchmark data generator of LDBC (that can …
SNB Interactive is the wild frontier, with very few rules. This is necessary, among other reasons, because there is no standard property graph data model, and because the contestants support a broad mix of programming models, ranging from in-process APIs to declarative query.
+
In the case of Virtuoso, we have played with SQL and SPARQL implementations. For a fixed schema and well known workload, SQL will always win. The reason for this is that …
LDBC is presenting two papers at the next edition of the ACM SIGMOD/PODS conference held in Melbourne from May 31st to June 4th, 2015. The annual ACM SIGMOD/PODS conference is a leading international forum for database researchers, practitioners, developers, and users to explore cutting-edge ideas and results, and to exchange techniques, tools and experiences.
+
On the industry track, LDBC will be presenting the Social Network Benchmark Interactive …
This post is the first in a series of blogs analyzing the LDBC Social Network Benchmark Interactive workload. This is written from the dual perspective of participating in the benchmark design and of building the OpenLink Virtuoso implementation of same.
+
With two implementations of SNB interactive at four different scales, we can take a first look at what the benchmark is really about. The hallmark of a benchmark implementation is that its …
In a previous 3-part blog series we touched upon the difficulties of executing the LDBC SNB Interactive (SNB) workload, while achieving good performance and scalability. What we didn’t discuss is why these difficulties were unique to SNB, and what aspects of the way we perform workload execution are scientific contributions - novel solutions to previously unsolved problems. This post will highlight the differences between SNB and more …
As discussed in previous posts, one of the features that makes Datagen more realistic is the fact that the activity volume of the simulated Persons is not uniform, but forms spikes. In this blog entry I want to explain more in depth how this is actually implemented inside of the generator.
+
First of all, I start with a few basics of how Datagen works internally. In Datagen, once the person graph has been created (persons and their relationships), …
This blog entry is about one of the features of DATAGEN that makes it different from other synthetic graph generators that can be found in the literature: the community structure of the graph.
+
When generating synthetic graphs, one must not only pay attention to quantitative measures such as the number of nodes and edges, but also to more qualitative characteristics such as the degree distribution and the clustering coefficient. Real graphs, and …
The Linked Data paradigm has become the prominent enabler for sharing huge volumes of data using Semantic Web technologies, and has created novel challenges for non-relational data management systems, such as RDF and graph engines. Efficient data access through queries is perhaps the most important data management task, and is enabled through query optimization techniques, which amount to the discovery of optimal or close to optimal execution …
When talking about DATAGEN and other graph generators with social network characteristics, our attention is typically drawn to the friendship subgraph and/or its structure. However, a social graph is more than a bunch of people connected by friendship relations; it contains many other things worth looking at. With a quick look at commercial social networks like Facebook, Twitter or Google+, one can easily identify a lot of other …
The SNB Driver part 1 post introduced, broadly, the challenges faced when developing a workload driver for the LDBC SNB benchmark. In this blog we’ll drill down deeper into the details of what it means to execute “dependent queries” during benchmark execution, and how this is handled in the driver. First of all, as many driver-specific terms will be used, below is a listing of their definitions. There is no need to read them in …
Based on the dependencies certain operations have, and on the granularity of parallelism we wish to achieve while executing them, we assign a Dependency Mode and an Execution Mode to every operation type. Using these …
The Semantic Publishing Instance Matching Benchmark (SPIMBench) is a novel benchmark for the assessment of instance matching techniques for RDF data with an associated schema. SPIMBench extends the state-of-the art instance matching benchmarks for RDF data in three main aspects: it allows for systematic scalability testing, supports a wider range of test cases including semantics-aware ones, and provides an enriched gold standard.
We are presently working on the SNB BI workload. Andrey Gubichev of TU München and I are going through the queries and playing with two SQL-based implementations, one on Virtuoso and the other on HyPer.
+
As discussed before, the BI workload has the same choke points as TPC-H as a base but pushes further in terms of graphiness and query complexity.
+
There are obvious marketing applications for a SNB-like dataset. There are also security …
LDBC’s Semantic Publishing Benchmark (SPB) measures the performance of an RDF database in a load typical for metadata-based content publishing, such as the famous BBC Dynamic Semantic Publishing scenario. Such a load combines tens of updates per second (e.g. adding metadata about new articles) with an even higher volume of read requests (SPARQL queries collecting recent content and data to generate a web page on a specific subject, e.g. Frank …
In previous posts (Getting started with snb, DATAGEN: data generation for the Social Network Benchmark), Arnau Prat discussed the main features and characteristics of DATAGEN: realism, scalability, determinism, usability. DATAGEN is the social network data generator used by the three LDBC-SNB workloads, which produces data simulating the activity in a social network site during a period of time. In this post, we conduct a series of experiments …
In this multi-part blog we consider the challenge of running the LDBC Social Network Interactive Benchmark (LDBC SNB) workload in parallel, i.e. the design of the workload driver that will issue the queries against the System Under Test (SUT). We go through design principles that were implemented for the LDBC SNB workload generator/load tester (simply referred to as driver). Software and documentation for this driver is available here: …
LDBC SPB (Semantic Publishing Benchmark) is based on the BBC linked data platform use case. Thus the data modelling and transaction mix reflect the BBC's actual utilization of RDF. But a benchmark is not only a condensation of current best practices. The BBC linked data platform is an Ontotext GraphDB deployment; GraphDB was formerly known as OWLIM.
+
So, in SPB we wanted to address substantially more complex queries than the lookups that …
The Semantic Publishing Benchmark (SPB), developed in the context of LDBC, aims at measuring the read and write operations that can be performed in the context of a media organisation. It simulates the management and consumption of RDF metadata describing media assets and creative works. The scenario is based around a media organisation that maintains RDF descriptions of its catalogue of creative works. These descriptions use a set of ontologies …
The Linked Data Benchmark Council's (LDBC) mission is to design and maintain benchmarks for graph data management systems, to establish and enforce standards for running these benchmarks, and to publish and arbitrate the official benchmark results. The council and its https://ldbcouncil.org website just launched, and in its first 1.5 years of existence, most effort at LDBC has gone into investigating the needs of the field through interaction …
The Linked Data Benchmark Council (LDBC) is reaching a milestone today, June 23 2014, in announcing that two of the benchmarks that it has been developing for 1.5 years have now reached the status of Public Draft. This concerns the Semantic Publishing Benchmark (SPB) and the interactive workload of the Social Network Benchmark (SNB). In the case of LDBC, the release is staged: for now, the benchmark software runs only read-only queries. This will be …
Social Network interaction is amongst the most natural and widely spread activities in the internet society, and it has turned out to be a very useful way for people to socialise at different levels (friendship, professional, hobby, etc.). As such, Social Networks are well understood from the point of view of the data involved and the interaction required by their actors. Thus, the concepts of friends of friends, or retweet are well established …
It is with great pleasure that we announce the new LDBC organisation site at www.ldbcouncil.org. The LDBC started as a European Community FP7 funded project with the objective to create, foster and become an industry reference for benchmarking RDF and Graph technologies. A period of more than one and a half years has led us to the creation of the first two workloads, the Semantic Publishing Benchmark and the Social Network Benchmark in its …
Following the 1st International workshop on Benchmarking RDF Systems (BeRSys 2013) the aim of the BeRSys 2014 workshop is to provide a discussion forum where researchers and industrials can meet to discuss topics related to the performance of RDF systems. BeRSys 2014 is the only workshop dedicated to benchmarking different aspects of RDF engines - in the line of TPCTC series of workshops.The focus of the workshop is to expose and initiate …
As explained in a previous post, the LDBC Social Network Benchmark (LDBC-SNB) has the objective to provide a realistic yet challenging workload, consisting of a social network and a set of queries. Both have to be realistic, easy to understand and easy to generate. This post has the objective to discuss the main features of DATAGEN, the social network data generator provided by LDBC-SNB, which is an evolution of S3G2 [1].
In a previous blog post titled “Is SNB like Facebook’s LinkBench?”, Peter Boncz discusses the design philosophy that shapes SNB and how it compares to other existing benchmarks such as LinkBench. In this post, I will briefly introduce the essential parts forming SNB, which are DATAGEN, the LDBC execution driver and the workloads.
+
DATAGEN
+
DATAGEN is the data generator used by all the workloads of SNB. Here we introduced the …
The LDBC Social Network Benchmark (SNB) is composed of three distinct workloads, interactive, business intelligence and graph analytics. This post introduces the interactive workload.
+
The benchmark measures the speed of queries of medium complexity against a social network being constantly updated. The queries are scoped to a user's social environment and potentially access data associated with the friends of a user and their friends.
In this post, I will discuss in some detail the rationale and goals of the design of the Social Network Benchmark (SNB) and explain how it relates to real …
Synopsis: Now is the time to finalize the interactive part of the Social Network Benchmark (SNB). The benchmark must be both credible in a real social network setting and pose new challenges. There are many hard queries but not enough representation for what online systems in fact do. So, the workload mix must strike a balance between the practice and presenting new challenges.
+
It is about to be showtime for LDBC. The initial installment of the …
In previous posts (this and this) we briefly introduced the design goals and philosophy behind DATAGEN, the data generator used in LDBC-SNB. In this post, I will explain how to use DATAGEN to generate the necessary datasets to run LDBC-SNB. Of course, as DATAGEN is continuously under development, the instructions given in this tutorial might change in the future.
+
Getting and Configuring Hadoop
+
DATAGEN runs on top of Hadoop 1.2.1 in order to scale. …
Note: consider this post as a continuation of the “Making it interactive” post by Orri Erling.
+
I have now completed the Virtuoso TPC-H work, including scale out. Optimization possibilities extend to infinity but the present level is good enough. TPC-H is the classic of all analytics benchmarks and is difficult enough, I have extensive commentary on this on my blog (In Hoc Signo Vinces series), including experimental results. This is, …
During the past six months we (the OWLIM Team at Ontotext) have integrated the LDBC Semantic Publishing Benchmark (LDBC-SPB) as a part of our development and release process.
+
First thing we’ve started using the LDBC-SPB for is to monitor the performance of our RDF Store when a new release is about to come out.
+
Initially we’ve decided to fix some of the benchmark parameters :
+
+
the dataset size: 50 million triples (LDBC-SPB50)
+
benchmark warmup …
+
+
+
+
+
new file mode 100644
index 00000000..3d2c9371
Binary files /dev/null and b/images/members/volcengine.png differ
diff --git a/images/members/zhejiang-lab.png b/images/members/zhejiang-lab.png
new file mode 100644
index 00000000..b00c0def
Binary files /dev/null and b/images/members/zhejiang-lab.png differ
diff --git a/images/slide-image-1.jpg b/images/slide-image-1.jpg
new file mode 100644
index 00000000..17ff8d92
Binary files /dev/null and b/images/slide-image-1.jpg differ
diff --git a/images/slide-image-2.jpg b/images/slide-image-2.jpg
new file mode 100644
index 00000000..af1cac9f
Binary files /dev/null and b/images/slide-image-2.jpg differ
diff --git a/images/slide-image-3.jpg b/images/slide-image-3.jpg
new file mode 100644
index 00000000..970ebbe7
Binary files /dev/null and b/images/slide-image-3.jpg differ
diff --git a/images/slide-image-4.jpg b/images/slide-image-4.jpg
new file mode 100644
index 00000000..880ea210
Binary files /dev/null and b/images/slide-image-4.jpg differ
diff --git a/images/steering-committee-pictures/alastair-green.jpg b/images/steering-committee-pictures/alastair-green.jpg
new file mode 100644
index 00000000..4a4fa88f
Binary files /dev/null and b/images/steering-committee-pictures/alastair-green.jpg differ
diff --git a/images/steering-committee-pictures/gabor-szarnyas.jpg b/images/steering-committee-pictures/gabor-szarnyas.jpg
new file mode 100644
index 00000000..a47d1d3e
Binary files /dev/null and b/images/steering-committee-pictures/gabor-szarnyas.jpg differ
diff --git a/images/steering-committee-pictures/judy-bingham.jpg b/images/steering-committee-pictures/judy-bingham.jpg
new file mode 100644
index 00000000..d3acc221
Binary files /dev/null and b/images/steering-committee-pictures/judy-bingham.jpg differ
diff --git a/images/steering-committee-pictures/oskar-van-rest.jpg b/images/steering-committee-pictures/oskar-van-rest.jpg
new file mode 100644
index 00000000..a2a58161
Binary files /dev/null and b/images/steering-committee-pictures/oskar-van-rest.jpg differ
diff --git a/images/steering-committee-pictures/peter-boncz.jpg b/images/steering-committee-pictures/peter-boncz.jpg
new file mode 100644
index 00000000..49b1bb77
Binary files /dev/null and b/images/steering-committee-pictures/peter-boncz.jpg differ
diff --git a/images/steering-committee-pictures/shipeng-qi.jpg b/images/steering-committee-pictures/shipeng-qi.jpg
new file mode 100644
index 00000000..e41ea82f
Binary files /dev/null and b/images/steering-committee-pictures/shipeng-qi.jpg differ
diff --git a/index.html b/index.html
new file mode 100644
index 00000000..ac5e4d31
--- /dev/null
+++ b/index.html
@@ -0,0 +1,487 @@
+
+ Linked Data Benchmark Council
+
Welcome to the LDBC website
+
On this website, you can find information on benchmark specifications and audited results, on how the LDBC organization works, and benchmarking resources.
LDBC brings industry and users together to develop benchmarks with which the state of the art and advances in graph database technologies can be assessed and directed.
LDBC SNB provides a data generator, which produces synthetic datasets, mimicking a social network’s activity during a period of time. Datagen is defined by the characteristics of realism, scalability, …
The Linked Data Benchmark Council (LDBC) is a non-profit organization aiming to define standard graph benchmarks to foster a community around graph processing technologies. LDBC consists of members from both industry and academia, including organizations and individuals.
The Financial Benchmark (FinBench) project defines a graph database benchmark targeting financial scenarios such as anti-fraud and risk control. It is maintained by the LDBC FinBench Task Force. The benchmark currently has one workload, the Transaction Workload, capturing an OLTP scenario with complex read queries that access the …
David Püroja just completed his MSc thesis on creating a design towards SNB Interactive v2 at CWI’s Database Architectures group. David and I gave a deep-dive talk at the FOSDEM conference’s graph developer room titled The LDBC Social Network …
LDBC SNB provides a data generator, which produces synthetic datasets, mimicking a social network’s activity during a period of time. Datagen is defined by the characteristics of realism, scalability, determinism and usability. More than two years have elapsed since my last technical update on LDBC SNB Datagen, in which I discussed the reasons for moving the code to Apache Spark from the MapReduce-based Apache Hadoop implementation and the …
The Financial Benchmark (FinBench) project aims to define a graph database evaluation benchmark and to develop a data generation process and a query driver that make the evaluation of graph databases representative, reliable and comparable, especially in financial scenarios such as anti-fraud and risk control. FinBench is scheduled to be released in the …
LDBC’s Social Network Benchmark [4] (LDBC SNB) is an industrial and academic initiative, formed by principal actors in the field of graph-like data management. Its goal is to define a framework where different graph-based technologies can be fairly tested and compared, that can drive the identification of systems’ bottlenecks and required functionalities, and can help researchers open new frontiers in high-performance graph data …
Apache Flink [1] is an open source platform for distributed stream and batch data processing. Flink’s core is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations over data streams. Flink also builds batch processing on top of the streaming engine, overlaying native iteration support, managed memory, and program optimization.
The number of datasets published in the Web of Data as part of the Linked Data Cloud is constantly increasing. The Linked Data paradigm is based on the unconstrained publication of information by different publishers, and the interlinking of web resources through “same-as” links which specify that two URIs correspond to the same real world object. In the vast number of data sources participating in the Linked Data Cloud, this information is not …
fairly frequent short updates, with no update contention worth mentioning
+
short random lookups
+
medium complex queries centered around a person’s social environment
+
The updates exist so as to invalidate strategies that rely too heavily on precomputation. The short lookups exist for the sake of realism; after all, an …
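The three-part mix described above can be sketched as a weighted sampler over operation categories. The operation names and frequencies below are illustrative stand-ins, not the per-query frequencies from the SNB specification:

```python
import random

# Hypothetical operation mix: frequent short updates, short lookups,
# and a smaller share of medium-complexity queries. The real SNB
# Interactive mix is defined per query type in the specification.
OPERATION_MIX = [
    ("short_update", 0.45),   # frequent, low-contention writes
    ("short_lookup", 0.40),   # point reads, included for realism
    ("complex_query", 0.15),  # neighbourhood-centred queries
]

def sample_operations(n, seed=42):
    """Draw a reproducible stream of n operation types from the mix."""
    rng = random.Random(seed)
    names = [name for name, _ in OPERATION_MIX]
    weights = [weight for _, weight in OPERATION_MIX]
    return rng.choices(names, weights=weights, k=n)

ops = sample_operations(10_000)
```

A fixed seed keeps the stream deterministic, which matters for a benchmark driver: two runs against different systems should issue the same operation sequence.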
On the 31st of May, the GRADES workshop will take place in Melbourne, co-located with ACM SIGMOD. GRADES started as an initiative of the Linked Data Benchmark Council at SIGMOD/PODS 2013, held in New York.
+
Among the papers published in this edition we have “Graphalytics: A Big Data Benchmark for Graph-Processing Platforms”, which presents a new benchmark that uses the Social Network Benchmark data generator of LDBC (that can …
SNB Interactive is the wild frontier, with very few rules. This is necessary, among other reasons, because there is no standard property graph data model, and because the contestants support a broad mix of programming models, ranging from in-process APIs to declarative query.
+
In the case of Virtuoso, we have played with SQL and SPARQL implementations. For a fixed schema and well known workload, SQL will always win. The reason for this is that …
LDBC is presenting two papers at the next edition of the ACM SIGMOD/PODS conference held in Melbourne from May 31st to June 4th, 2015. The annual ACM SIGMOD/PODS conference is a leading international forum for database researchers, practitioners, developers, and users to explore cutting-edge ideas and results, and to exchange techniques, tools and experiences.
+
On the industry track, LDBC will be presenting the Social Network Benchmark Interactive …
This post is the first in a series of blogs analyzing the LDBC Social Network Benchmark Interactive workload. It is written from the dual perspective of participating in the benchmark design and of building the OpenLink Virtuoso implementation of it.
+
With two implementations of SNB interactive at four different scales, we can take a first look at what the benchmark is really about. The hallmark of a benchmark implementation is that its …
In a previous 3-part blog series we touched upon the difficulties of executing the LDBC SNB Interactive (SNB) workload, while achieving good performance and scalability. What we didn’t discuss is why these difficulties were unique to SNB, and what aspects of the way we perform workload execution are scientific contributions - novel solutions to previously unsolved problems. This post will highlight the differences between SNB and more …
As discussed in previous posts, one of the features that makes Datagen more realistic is the fact that the activity volume of the simulated Persons is not uniform, but forms spikes. In this blog entry I want to explain more in depth how this is actually implemented inside of the generator.
+
First of all, I start with a few basics of how Datagen works internally. In Datagen, once the person graph has been created (persons and their relationships), …
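The spiked-activity idea can be sketched as mixing a uniform background of posts with bursts clustered around event days. The event days, spike share and spread below are invented for illustration; Datagen's actual model derives its spikes differently:

```python
import random

def spiked_activity(num_posts, window_days=365, events=(60, 200),
                    spike_share=0.7, seed=7):
    """Assign post timestamps (in days) so activity clusters around events.

    Illustrative only: `events` and `spike_share` are made-up parameters,
    not Datagen's actual configuration.
    """
    rng = random.Random(seed)
    days = []
    for _ in range(num_posts):
        if rng.random() < spike_share:
            centre = rng.choice(events)
            day = rng.gauss(centre, 3.0)       # cluster tightly around an event
        else:
            day = rng.uniform(0, window_days)  # uniform background chatter
        days.append(min(max(day, 0.0), float(window_days)))
    return days

timestamps = spiked_activity(5000)
```

Histogramming `timestamps` shows sharp peaks near the event days on top of a flat baseline, which is the qualitative shape the post describes.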
This blog entry is about one of the features of DATAGEN that makes it different from other synthetic graph generators that can be found in the literature: the community structure of the graph.
+
When generating synthetic graphs, one must not only pay attention to quantitative measures such as the number of nodes and edges, but also to more qualitative characteristics such as the degree distribution and the clustering coefficient. Real graphs, and …
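Both measures mentioned here are straightforward to compute on an adjacency-set representation. This sketch uses a toy four-node graph rather than generated data:

```python
from collections import Counter

def degree_histogram(adj):
    """Histogram of node degrees for an undirected adjacency-set graph."""
    return Counter(len(neigh) for neigh in adj.values())

def clustering_coefficient(adj, node):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    neigh = list(adj[node])
    k = len(neigh)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if neigh[j] in adj[neigh[i]])
    return 2.0 * links / (k * (k - 1))

# Toy graph: a triangle a-b-c plus a pendant edge a-d.
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}
```

On this toy graph, `b` sits in a closed triangle (coefficient 1.0) while the pendant node `d` has coefficient 0.0; a generator that matched only node and edge counts could easily miss such structure.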
The Linked Data paradigm has become the prominent enabler for sharing huge volumes of data using Semantic Web technologies, and has created novel challenges for non-relational data management systems, such as RDF and graph engines. Efficient data access through queries is perhaps the most important data management task, and is enabled through query optimization techniques, which amount to the discovery of optimal or close to optimal execution …
When talking about DATAGEN and other graph generators with social network characteristics, our attention is typically drawn to the friendship subgraph and its structure. However, a social graph is more than a bunch of people connected by friendship relations; it has many other aspects worth looking at. With a quick look at commercial social networks like Facebook, Twitter or Google+, one can easily identify a lot of other …
The SNB Driver part 1 post introduced, broadly, the challenges faced when developing a workload driver for the LDBC SNB benchmark. In this blog we’ll drill down deeper into the details of what it means to execute “dependent queries” during benchmark execution, and how this is handled in the driver. First of all, as many driver-specific terms will be used, below is a listing of their definitions. There is no need to read them in …
Based on the dependencies certain operations have, and on the granularity of parallelism we wish to achieve while executing them, we assign a Dependency Mode and an Execution Mode to every operation type. Using these …
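A minimal sketch of the per-operation-type metadata described here, and of a dispatcher that groups operations by their execution mode. The mode names and operation types below are illustrative, not the driver's actual API:

```python
# Hypothetical scheduling metadata, loosely modelled on the SNB driver's
# Dependency Mode / Execution Mode idea. Names are illustrative only.
SCHEDULING = {
    "add_friendship":  {"dependency": "read_write", "execution": "coordinated"},
    "add_post":        {"dependency": "write_only", "execution": "async"},
    "short_lookup":    {"dependency": "none",       "execution": "async"},
    "complex_query_9": {"dependency": "read_only",  "execution": "async"},
}

def partition_by_execution(operations):
    """Group a stream of operation-type names by their execution mode,
    preserving the order within each group."""
    groups = {}
    for op in operations:
        mode = SCHEDULING[op]["execution"]
        groups.setdefault(mode, []).append(op)
    return groups
```

Splitting the stream this way lets dependency-free operations run with maximal parallelism while operations that read their own earlier writes go through a coordinated path.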
The Semantic Publishing Instance Matching Benchmark (SPIMBench) is a novel benchmark for the assessment of instance matching techniques for RDF data with an associated schema. SPIMBench extends the state-of-the-art instance matching benchmarks for RDF data in three main aspects: it allows for systematic scalability testing, supports a wider range of test cases including semantics-aware ones, and provides an enriched gold standard.
We are presently working on the SNB BI workload. Andrey Gubichev of TU München and I are going through the queries and are playing with two SQL-based implementations, one on Virtuoso and the other on HyPer.
+
As discussed before, the BI workload has the same choke points as TPC-H as a base but pushes further in terms of graphiness and query complexity.
+
There are obvious marketing applications for a SNB-like dataset. There are also security …
LDBC’s Semantic Publishing Benchmark (SPB) measures the performance of an RDF database under a load typical for metadata-based content publishing, such as the famous BBC Dynamic Semantic Publishing scenario. Such a load combines tens of updates per second (e.g. adding metadata about new articles) with an even higher volume of read requests (SPARQL queries collecting recent content and data to generate a web page on a specific subject, e.g. Frank …
In previous posts (Getting started with snb, DATAGEN: data generation for the Social Network Benchmark), Arnau Prat discussed the main features and characteristics of DATAGEN: realism, scalability, determinism, usability. DATAGEN is the social network data generator used by the three LDBC-SNB workloads, which produces data simulating the activity in a social network site during a period of time. In this post, we conduct a series of experiments …
In this multi-part blog we consider the challenge of running the LDBC Social Network Interactive Benchmark (LDBC SNB) workload in parallel, i.e. the design of the workload driver that will issue the queries against the System Under Test (SUT). We go through design principles that were implemented for the LDBC SNB workload generator/load tester (simply referred to as driver). Software and documentation for this driver is available here: …
LDBC’s SPB (Semantic Publishing Benchmark) is based on the BBC linked data platform use case. Thus the data modelling and transaction mix reflects the BBC’s actual utilization of RDF. But a benchmark is not only a condensation of current best practices. The BBC linked data platform is an Ontotext GraphDB deployment. GraphDB was formerly known as OWLIM.
+
So, in SPB we wanted to address substantially more complex queries than the lookups that …
The Semantic Publishing Benchmark (SPB), developed in the context of LDBC, aims at measuring the read and write operations that can be performed in the context of a media organisation. It simulates the management and consumption of RDF metadata describing media assets and creative works. The scenario is based around a media organisation that maintains RDF descriptions of its catalogue of creative works. These descriptions use a set of ontologies …
The Linked Data Benchmark Council’s (LDBC) mission is to design and maintain benchmarks for graph data management systems, to establish and enforce standards in running these benchmarks, and to publish and arbitrate the official benchmark results. The council and its https://ldbcouncil.org website just launched, and in its first 1.5 years of existence, most effort at LDBC has gone into investigating the needs of the field through interaction …
The Linked Data Benchmark Council (LDBC) is reaching a milestone today, June 23 2014, in announcing that two of the benchmarks that it has been developing for 1.5 years have now reached the status of Public Draft. This concerns the Semantic Publishing Benchmark (SPB) and the interactive workload of the Social Network Benchmark (SNB). In the case of LDBC, the release is staged: for now, the benchmark software runs only read-only queries. This will be …
Social network interaction is among the most natural and widespread activities in internet society, and it has turned out to be a very useful way for people to socialise at different levels (friendship, professional, hobby, etc.). As such, social networks are well understood from the point of view of the data involved and the interaction required by their actors. Thus, concepts such as friends of friends, or retweets, are well established …
It is with great pleasure that we announce the new LDBC organisation site at www.ldbcouncil.org. The LDBC started as a European Community FP7 funded project with the objective to create, foster and become an industry reference for benchmarking RDF and Graph technologies. A period of more than one and a half years has led us to the creation of the first two workloads, the Semantic Publishing Benchmark and the Social Network Benchmark in its …
Following the 1st International Workshop on Benchmarking RDF Systems (BeRSys 2013), the aim of the BeRSys 2014 workshop is to provide a discussion forum where researchers and industry practitioners can meet to discuss topics related to the performance of RDF systems. BeRSys 2014 is the only workshop dedicated to benchmarking different aspects of RDF engines, in the line of the TPCTC series of workshops. The focus of the workshop is to expose and initiate …
As explained in a previous post, the LDBC Social Network Benchmark (LDBC-SNB) has the objective to provide a realistic yet challenging workload, consisting of a social network and a set of queries. Both have to be realistic, easy to understand and easy to generate. This post has the objective to discuss the main features of DATAGEN, the social network data generator provided by LDBC-SNB, which is an evolution of S3G2 [1].
In a previous blog post titled “Is SNB like Facebook’s LinkBench?”, Peter Boncz discusses the design philosophy that shapes SNB and how it compares to other existing benchmarks such as LinkBench. In this post, I will briefly introduce the essential parts forming SNB, which are DATAGEN, the LDBC execution driver and the workloads.
+
DATAGEN
+
DATAGEN is the data generator used by all the workloads of SNB. Here we introduced the …
The LDBC Social Network Benchmark (SNB) is composed of three distinct workloads: interactive, business intelligence and graph analytics. This post introduces the interactive workload.
+
The benchmark measures the speed of queries of medium complexity against a social network being constantly updated. The queries are scoped to a user’s social environment and potentially access data associated with the friends of a user and their friends.
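The "social environment" scoping can be sketched as a two-hop neighbourhood lookup over an adjacency-set graph; the tiny network below is illustrative, not benchmark data:

```python
def social_environment(adj, person):
    """Friends and friends-of-friends of `person`, excluding the person."""
    friends = set(adj.get(person, ()))
    friends_of_friends = set()
    for friend in friends:
        friends_of_friends |= set(adj.get(friend, ()))
    return (friends | friends_of_friends) - {person}

# Toy network: alice - bob - carol - dave (a simple chain).
network = {
    "alice": {"bob"},
    "bob": {"alice", "carol"},
    "carol": {"bob", "dave"},
    "dave": {"carol"},
}
```

Note that `dave` is three hops from `alice`, so he falls outside her social environment; in a real system this scoping is what keeps the interactive queries at medium rather than unbounded complexity.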
In this post, I will discuss in some detail the rationale and goals of the design of the Social Network Benchmark (SNB) and explain how it relates to real …
Synopsis: Now is the time to finalize the interactive part of the Social Network Benchmark (SNB). The benchmark must be both credible in a real social network setting and pose new challenges. There are many hard queries but not enough representation for what online systems in fact do. So, the workload mix must strike a balance between the practice and presenting new challenges.
+
It is about to be showtime for LDBC. The initial installment of the …
In previous posts (this and this) we briefly introduced the design goals and philosophy behind DATAGEN, the data generator used in LDBC-SNB. In this post, I will explain how to use DATAGEN to generate the necessary datasets to run LDBC-SNB. Of course, as DATAGEN is continuously under development, the instructions given in this tutorial might change in the future.
+
Getting and Configuring Hadoop
+
DATAGEN runs on top of Hadoop 1.2.1 in order to scale. …
Note: consider this post as a continuation of the “Making it interactive” post by Orri Erling.
+
I have now completed the Virtuoso TPC-H work, including scale out. Optimization possibilities extend to infinity but the present level is good enough. TPC-H is the classic of all analytics benchmarks and is difficult enough, I have extensive commentary on this on my blog (In Hoc Signo Vinces series), including experimental results. This is, …
During the past six months we (the OWLIM Team at Ontotext) have integrated the LDBC Semantic Publishing Benchmark (LDBC-SPB) as a part of our development and release process.
+
First thing we’ve started using the LDBC-SPB for is to monitor the performance of our RDF Store when a new release is about to come out.
+
Initially we’ve decided to fix some of the benchmark parameters:
+
+
the dataset size - 50 million triples (LDBC-SPB50)
+
benchmark warmup …
The Financial Benchmark (FinBench) project defines a graph database benchmark targeting financial scenarios such as anti-fraud and risk control. It is maintained by the LDBC FinBench Task Force. The benchmark has one workload currently, Transaction Workload, capturing OLTP scenario with complex read queries that access the …
David Püroja just completed his MSc thesis on creating a design towards SNB Interactive v2 at CWI’s Database Architectures group. David and I gave a deep-dive talk at the FOSDEM conference’s graph developer room titled The LDBC Social Network …
LDBC SNB provides a data generator, which produces synthetic datasets, mimicking a social network’s activity during a period of time. Datagen is defined by the charasteristics of realism, scalability, determinism and usability. More than two years have elapsed since my last technical update on LDBC SNB Datagen, in which I discussed the reasons for moving the code to Apache Spark from the MapReduce-based Apache Hadoop implementation and the …
The Financial Benchmark (FinBench) project aims to define a graph database evaluating benchmark and develop a data generation process and a query driver to make the evaluation of the graph database representative, reliable and comparable, especially in financial scenarios, such as anti-fraud and risk control. The FinBench is scheduled to be released in the …
LDBC’s Social Network Benchmark [4] (LDBC SNB) is an industrial and academic initiative, formed by principal actors in the field of graph-like data management. Its goal is to define a framework where different graph-based technologies can be fairly tested and compared, that can drive the identification of systems’ bottlenecks and required functionalities, and can help researchers open new frontiers in high-performance graph data …
Apache Flink [1] is an open source platform for distributed stream and batch data processing. Flink’s core is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations over data streams. Flink also builds batch processing on top of the streaming engine, overlaying native iteration support, managed memory, and program optimization.
The number of datasets published in the Web of Data as part of the Linked Data Cloud is constantly increasing. The Linked Data paradigm is based on the unconstrained publication of information by different publishers, and the interlinking of web resources through “same-as” links which specify that two URIs correspond to the same real world object. In the vast number of data sources participating in the Linked Data Cloud, this information is not …
fairly frequent short updates, with no update contention worth mentioning
+
+
+
short random lookups
+
+
+
medium complex queries centered around a person’s social environment
+
+
+
The updates exist so as to invalidate strategies that rely too heavily on precomputation. The short lookups exist for the sake of realism; after all, an …
Next 31st of May the GRADES workshop will take place in Melbourne within the ACM/SIGMOD presentation. GRADES started as an initiative of the Linked Data Benchmark Council in the SIGMOD/PODS 2013 held in New York.
+
Among the papers published in this edition we have “Graphalytics: A Big Data Benchmark for Graph-Processing Platforms”, which presents a new benchmark that uses the Social Network Benchmark data generator of LDBC (that can …
SNB Interactive is the wild frontier, with very few rules. This is necessary, among other reasons, because there is no standard property graph data model, and because the contestants support a broad mix of programming models, ranging from in-process APIs to declarative query.
+
In the case of Virtuoso, we have played with SQL and SPARQL implementations. For a fixed schema and well known workload, SQL will always win. The reason for this is that …
LDBC is presenting two papers at the next edition of the ACM SIGMOD/PODS conference held in Melbourne from May 31st to June 4th, 2015. The annual ACM SIGMOD/PODS conference is a leading international forum for database researchers, practitioners, developers, and users to explore cutting-edge ideas and results, and to exchange techniques, tools and experiences.
+
On the industry track, LDBC will be presenting the Social Network Benchmark Interactive …
This post is the first in a series of blogs analyzing the LDBC Social Network Benchmark Interactive workload. This is written from the dual perspective of participating in the benchmark design and of building the OpenLink Virtuoso implementation of same.
+
With two implementations of SNB interactive at four different scales, we can take a first look at what the benchmark is really about. The hallmark of a benchmark implementation is that its …
In a previous 3-part blog series we touched upon the difficulties of executing the LDBC SNB Interactive (SNB) workload, while achieving good performance and scalability. What we didn’t discuss is why these difficulties were unique to SNB, and what aspects of the way we perform workload execution are scientific contributions - novel solutions to previously unsolved problems. This post will highlight the differences between SNB and more …
As discussed in previous posts, one of the features that makes Datagen more realistic is the fact that the activity volume of the simulated Persons is not uniform, but forms spikes. In this blog entry I want to explain more in depth how this is actually implemented inside of the generator.
+
First of all, I start with a few basics of how Datagen works internally. In Datagen, once the person graph has been created (persons and their relationships), …
This blog entry is about one of the features of DATAGEN that makes it different from other synthetic graph generators that can be found in the literature: the community structure of the graph.
+
When generating synthetic graphs, one must not only pay attention to quantitative measures such as the number of nodes and edges, but also to other more qualitative characteristics such as the degree distribution, clustering coefficient. Real graphs, and …
The Linked Data paradigm has become the prominent enabler for sharing huge volumes of data using Semantic Web technologies, and has created novel challenges for non-relational data management systems, such as RDF and graph engines. Efficient data access through queries is perhaps the most important data management task, and is enabled through query optimization techniques, which amount to the discovery of optimal or close to optimal execution …
When talking about DATAGEN and other graph generators with social network characteristics, our attention is typically borrowed by the friendship subgraph and/or its structure. However, a social graph is more than a bunch of people being connected by friendship relations, but has a lot more of other things is worth to look at. With a quick view to commercial social networks like Facebook, Twitter or Google+, one can easily identify a lot of other …
The SNB Driver part 1 post introduced, broadly, the challenges faced when developing a workload driver for the LDBC SNB benchmark. In this blog we’ll drill down deeper into the details of what it means to execute “dependent queries” during benchmark execution, and how this is handled in the driver. First of all, as many driver-specific terms will be used, below is a listing of their definitions. There is no need to read them in …
Based on the dependencies certain operations have, and on the granularity of parallelism we wish to achieve while executing them, we assign a Dependency Mode and an Execution Mode to every operation type. Using these …
The Semantic Publishing Instance Matching Benchmark (SPIMBench) is a novel benchmark for the assessment of instance matching techniques for RDF data with an associated schema. SPIMBench extends the state-of-the art instance matching benchmarks for RDF data in three main aspects: it allows for systematic scalability testing, supports a wider range of test cases including semantics-aware ones, and provides an enriched gold standard.
We are presently working on the SNB BI workload. Andrey Gubichev of TU Munchen and myself are going through the queries and are playing with two SQL based implementations, one on Virtuoso and the other on Hyper.
+
As discussed before, the BI workload has the same choke points as TPC-H as a base but pushes further in terms of graphiness and query complexity.
+
There are obvious marketing applications for a SNB-like dataset. There are also security …
LDBC’s Semantic Publishing Benchmark (SPB) measures the performance of an RDF database in a load typical for metadata-based content publishing, such as the famous BBC Dynamic Semantic Publishing scenario. Such load combines tens of updates per second (e.g. adding metadata about new articles) with even higher volume of read requests (SPARQL queries collecting recent content and data to generate web page on a specific subject, e.g. Frank …
In previous posts (Getting started with snb, DATAGEN: data generation for the Social Network Benchmark), Arnau Prat discussed the main features and characteristics of DATAGEN: realism, scalability, determinism, usability. DATAGEN is the social network data generator used by the three LDBC-SNB workloads, which produces data simulating the activity in a social network site during a period of time. In this post, we conduct a series of experiments …
In this multi-part blog we consider the challenge of running the LDBC Social Network Interactive Benchmark (LDBC SNB) workload in parallel, i.e. the design of the workload driver that will issue the queries against the System Under Test (SUT). We go through design principles that were implemented for the LDBC SNB workload generator/load tester (simply referred to as driver). Software and documentation for this driver is available here: …
LDBCSPB (Semantic Publishing Benchmark) is based on the BBC linked data platform use case. Thus the data modelling and transaction mix reflects the BBC’s actual utilization of RDF. But a benchmark is not only a condensation of current best practices. The BBC linked data platform is an Ontotext Graph DB deployment. Graph DB was formerly known as Owlim.
+
So, in SPB we wanted to address substantially more complex queries than the lookups that …
The Semantic Publishing Benchmark (SPB), developed in the context of LDBC, aims at measuring the read and write operations that can be performed in the context of a media organisation. It simulates the management and consumption of RDF metadata describing media assets and creative works. The scenario is based around a media organisation that maintains RDF descriptions of its catalogue of creative works. These descriptions use a set of ontologies …
The Linked Data Benchmark Council’s (LDBC) mission is to design and maintain benchmarks for graph data management systems, to establish and enforce standards for running these benchmarks, and to publish and arbitrate the official benchmark results. The council and its https://ldbcouncil.org website have just launched, and in its first 1.5 years of existence, most effort at LDBC has gone into investigating the needs of the field through interaction …
The Linked Data Benchmark Council (LDBC) is reaching a milestone today, June 23 2014, in announcing that two of the benchmarks it has been developing for 1.5 years have now reached Public Draft status. This concerns the Semantic Publishing Benchmark (SPB) and the interactive workload of the Social Network Benchmark (SNB). In the case of LDBC, the release is staged: for now, the benchmark software runs read-only queries only. This will be …
Social network interaction is among the most natural and widespread activities of the internet society, and it has turned out to be a very useful way for people to socialise at different levels (friendship, professional, hobby, etc.). As such, social networks are well understood from the point of view of the data involved and the interaction required by their actors. Thus, concepts such as friends of friends, or retweets, are well established …
It is with great pleasure that we announce the new LDBC organisation site at www.ldbcouncil.org. The LDBC started as a European Community FP7-funded project with the objective to create, foster and become an industry reference for benchmarking RDF and graph technologies. More than one and a half years of work have led us to the creation of the first two workloads, the Semantic Publishing Benchmark and the Social Network Benchmark in its …
Following the 1st International Workshop on Benchmarking RDF Systems (BeRSys 2013), the aim of the BeRSys 2014 workshop is to provide a discussion forum where researchers and industry practitioners can meet to discuss topics related to the performance of RDF systems. BeRSys 2014 is the only workshop dedicated to benchmarking different aspects of RDF engines, in the line of the TPCTC series of workshops. The focus of the workshop is to expose and initiate …
As explained in a previous post, the LDBC Social Network Benchmark (LDBC-SNB) aims to provide a realistic yet challenging workload, consisting of a social network and a set of queries. Both have to be realistic, easy to understand and easy to generate. This post discusses the main features of DATAGEN, the social network data generator provided by LDBC-SNB, which is an evolution of S3G2 [1].
In a previous blog post titled “Is SNB like Facebook’s LinkBench?”, Peter Boncz discusses the design philosophy that shapes SNB and how it compares to other existing benchmarks such as LinkBench. In this post, I will briefly introduce the essential parts forming SNB, which are DATAGEN, the LDBC execution driver and the workloads.
+
DATAGEN
+
DATAGEN is the data generator used by all the workloads of SNB. Here we introduced the …
The LDBC Social Network Benchmark (SNB) is composed of three distinct workloads, interactive, business intelligence and graph analytics. This post introduces the interactive workload.
+
The benchmark measures the speed of queries of medium complexity against a social network that is being constantly updated. The queries are scoped to a user’s social environment and potentially access data associated with a user’s friends and their friends.
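The scoping rule above, restricting a query to a user's friends and friends of friends, amounts to a two-hop neighbourhood computation, which can be sketched as follows. The toy graph and function names are illustrative, not part of the benchmark itself.

```python
# Hypothetical sketch: interactive queries are scoped to a user's
# two-hop neighbourhood (friends and friends of friends).

def two_hop_neighbourhood(friends, person):
    """Return the friends and friends-of-friends of `person`,
    excluding the person themselves."""
    hop1 = set(friends.get(person, ()))
    hop2 = set()
    for f in hop1:
        hop2.update(friends.get(f, ()))
    return (hop1 | hop2) - {person}

graph = {
    "alice": ["bob", "carol"],
    "bob": ["alice", "dave"],
    "carol": ["alice"],
    "dave": ["bob"],
}

print(sorted(two_hop_neighbourhood(graph, "alice")))
# → ['bob', 'carol', 'dave']
```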
In this post, I will discuss in some detail the rationale and goals of the design of the Social Network Benchmark (SNB) and explain how it relates to real …
Synopsis: Now is the time to finalize the interactive part of the Social Network Benchmark (SNB). The benchmark must be both credible in a real social network setting and pose new challenges. There are many hard queries, but not enough representation of what online systems in fact do. So, the workload mix must strike a balance between current practice and presenting new challenges.
+
It is about to be showtime for LDBC. The initial installment of the …
In previous posts (this and this) we briefly introduced the design goals and philosophy behind DATAGEN, the data generator used in LDBC-SNB. In this post, I will explain how to use DATAGEN to generate the necessary datasets to run LDBC-SNB. Of course, as DATAGEN is continuously under development, the instructions given in this tutorial might change in the future.
+
Getting and Configuring Hadoop
+
DATAGEN runs on top of Hadoop 1.2.1 in order to scale. …
Note: consider this post as a continuation of the “Making it interactive” post by Orri Erling.
+
I have now completed the Virtuoso TPC-H work, including scale-out. Optimization possibilities extend to infinity, but the present level is good enough. TPC-H is the classic of all analytics benchmarks and is difficult enough; I have extensive commentary on this on my blog (the In Hoc Signo Vinces series), including experimental results. This is, …
During the past six months we (the OWLIM Team at Ontotext) have integrated the LDBC Semantic Publishing Benchmark (LDBC-SPB) as part of our development and release process.
+
The first thing we started using LDBC-SPB for is monitoring the performance of our RDF store when a new release is about to come out.
+
Initially, we decided to fix some of the benchmark parameters:
+
+
* the dataset size - 50 million triples (LDBC-SPB50)
* benchmark warmup …
ilter(this.options.filter)}},{key:"toggleFilter",value:function(t){var e=this.options.filter;"all"===e?e=t:Array.isArray(e)?e.includes(t)?(e=e.filter(function(e){return e!==t}),1===e.length&&(e=e[0])):e.push(t):e=e===t?"all":[e,t],this.options.filter=e,this.filter(this.options.filter)}},{key:"filterFilterItems",value:function(t,e){var r=this.options.multifilterLogicalOperator;return"all"===e?t:t.filter(function(t){var i=t.getCategories();return Array.isArray(e)?"or"===r?Object(l.g)(i,e).length:Object(l.b)(e,i):i.includes(e)})}},{key:"sortFilterItems",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:"index",r=arguments.length>2&&void 0!==arguments[2]?arguments[2]:"asc",i=Object(l.k)(t,function(t){return"index"!==e&&"sortData"!==e?t.props.data[e]:t.props[e]});return"asc"===r?i:i.reverse()}},{key:"searchFilterItems",value:function(t){var e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:this.props.searchTerm;return e?t.filter(function(t){return t.contentsMatchSearch(e)}):t}},{key:"shuffleFilterItems",value:function(t){for(var e=Object(l.j)(t);t.length>1&&Object(l.f)(t,e);)e=Object(l.j)(t);return e}},{key:"render",value:function(t){var e=this,r=this.options,i=r.filter,n=r.filterInCss,o=r.filterOutCss,a=r.layout,u=r.multifilterLogicalOperator;this.props.FilterItems.filter(function(t){var r=t.getCategories(),n=Array.isArray(i),o=t.contentsMatchSearch(e.props.searchTerm);return!(n?"or"===u?Object(l.g)(r,i).length:Object(l.b)(i,r):r.includes(i))||!o}).forEach(function(t){t.filterOut(o)});var c=Object(s.a)(a,this);t.forEach(function(t,e){t.filterIn(c[e],n)})}},{key:"onTransitionEndCallback",value:function(){var t=this.props,e=t.filterizrState,r=t.FilterContainer;switch(e){case l.a.FILTERING:r.trigger("filteringEnd");break;case l.a.SORTING:r.trigger("sortingEnd");break;case l.a.SHUFFLING:r.trigger("shufflingEnd")}this.props.filterizrState=l.a.IDLE}},{key:"rebindFilterContainerEvents",value:function(){var 
t=this,e=this.props.FilterContainer,r=this.options,i=r.animationDuration,n=r.callbacks;e.unbindEvents(),e.bindEvents(n),e.bindTransitionEnd(function(){t.onTransitionEndCallback()},i)}},{key:"bindEvents",value:function(){var t=this,e=this.props.FilterContainer;this.rebindFilterContainerEvents(),c(window).on("resize.Filterizr",Object(l.e)(function(){e.updateWidth(),e.updateFilterItemsDimensions(),t.filter(t.options.filter)},250))}}]),t}();e.a=f},function(t,e,r){"use strict";function i(t,e){if(!(t instanceof e))throw new TypeError("Cannot call a class as a function")}var n=r(0),o=function(){function t(t,e){for(var r=0;r1&&void 0!==arguments[1]?arguments[1]:"";i(this,t),this.props={Filterizr:e,selector:r},this.setupFilterControls(),this.setupShuffleControls(),this.setupSearchControls(),this.setupSortControls()}return o(t,[{key:"setupFilterControls",value:function(){var t=this.props,e=t.Filterizr,r=t.selector;s(r+"[data-filter]").on("click.Filterizr",function(t){var r=s(t.currentTarget),i=r.attr("data-filter");e.options.filter=i,e.filter(e.options.filter)}),s(r+"[data-multifilter]").on("click.Filterizr",function(t){var r=s(t.target),i=r.attr("data-multifilter");e.toggleFilter(i)})}},{key:"setupShuffleControls",value:function(){var t=this.props,e=t.Filterizr,r=t.selector;s(r+"[data-shuffle]").on("click.Filterizr",function(){e.shuffle()})}},{key:"setupSearchControls",value:function(){var t=this.props,e=t.Filterizr,r=t.selector;s(r+"[data-search]").on("keyup.Filterizr",Object(n.e)(function(t){var r=s(t.target),i=r.val();e.props.searchTerm=i.toLowerCase(),e.search(e.props.searchTerm)},250))}},{key:"setupSortControls",value:function(){var t=this.props,e=t.Filterizr,r=t.selector;s(r+"[data-sortAsc]").on("click.Filterizr",function(){var t=s(r+"[data-sortOrder]").val();e.props.sortOrder="asc",e.sort(t,"asc")}),s(r+"[data-sortDesc]").on("click.Filterizr",function(){var 
t=s(r+"[data-sortOrder]").val();e.props.sortOrder="desc",e.sort(t,"desc")})}}]),t}();e.a=a},function(t,e,r){"use strict";var i=r(9),n=r(10),o=r(11),s=r(12),a=r(13),l=r(14),u=function(t,e){switch(t){case"horizontal":return Object(i.a)(e);case"vertical":return Object(n.a)(e);case"sameHeight":return Object(o.a)(e);case"sameWidth":return Object(s.a)(e);case"sameSize":return Object(a.a)(e);case"packed":return Object(l.a)(e);default:return Object(a.a)(e)}};e.a=u},function(t,e,r){"use strict";var i=function(t){var e=t.props,r=e.FilterContainer,i=e.FilteredItems,n=0,o=0,s=i.map(function(t){var e=t.props,r=e.w,i=e.h,s={left:n,top:0};return n+=r,i>o&&(o=i),s});return r.updateHeight(o),s};e.a=i},function(t,e,r){"use strict";var i=function(t){var e=t.props,r=e.FilterContainer,i=e.FilteredItems,n=0,o=i.map(function(t){var e=t.props.h,r={left:0,top:n};return n+=e,r});return r.updateHeight(n),o};e.a=i},function(t,e,r){"use strict";var i=function(t){var e=t.props,r=e.FilterContainer,i=e.FilteredItems,n=r.props.w,o=i[0].props.h,s=0,a=0,l=i.map(function(t){var e=t.props.w;a+e>n&&(s++,a=0);var r={left:a,top:o*s};return a+=e,r});return r.updateHeight((s+1)*i[0].props.h),l};e.a=i},function(t,e,r){"use strict";function i(t){if(Array.isArray(t)){for(var e=0,r=Array(t.length);e=s&&a++;var c=e-s*a;return l[c]+=u,{left:c*i,top:o(n,s,e)}});return r.updateHeight(Math.max.apply(Math,i(l))),u},o=function(t,e,r){if(e<=0)return 0;var i=0;if(r=0;)i+=t[r].props.h,r-=e;return i};e.a=n},function(t,e,r){"use strict";var i=function(t){var e=t.props,r=e.FilterContainer,i=e.FilteredItems,n=r.calcColumns(),o=0,s=i.map(function(t,e){return e%n==0&&e>=n&&o++,{left:(e-n*o)*t.props.w,top:o*t.props.h}}),a=i[0]&&i[0].props.h||0;return r.updateHeight((o+1)*a),s};e.a=i},function(t,e,r){"use strict";var i=function(t){var e=t.props,r=e.FilterContainer,i=e.FilteredItems,o=new n(r.props.w),s=i.map(function(t){var e=t.props;return{w:e.w,h:e.h}});o.fit(s);var a=s.map(function(t){var 
e=t.fit;return{left:e.x,top:e.y}});return r.updateHeight(o.root.h),a},n=function(t){this.init(t)};n.prototype={init:function(t){this.root={x:0,y:0,w:t}},fit:function(t){var e,r,i,n=t.length,o=n>0?t[0].h:0;for(this.root.h=o,e=0;ethis.length)&&-1!==this.indexOf(t,e)}),Array.prototype.includes||Object.defineProperty(Array.prototype,"includes",{value:function(t,e){if(null==this)throw new TypeError('"this" is null or not defined');var r=Object(this),i=r.length>>>0;if(0===i)return!1;for(var n=0|e,o=Math.max(n>=0?n:i-Math.abs(n),0);oli{position:relative}.fa-li{position:absolute;left:-2.14285714em;width:2.14285714em;top:.14285714em;text-align:center}.fa-li.fa-lg{left:-1.85714286em}.fa-border{padding:.2em .25em .15em;border:solid .08em #eee;border-radius:.1em}.fa-pull-left{float:left}.fa-pull-right{float:right}.fa.fa-pull-left{margin-right:.3em}.fa.fa-pull-right{margin-left:.3em}.pull-right{float:right}.pull-left{float:left}.fa.pull-left{margin-right:.3em}.fa.pull-right{margin-left:.3em}.fa-spin{-webkit-animation:fa-spin 2s infinite linear;animation:fa-spin 2s infinite linear}.fa-pulse{-webkit-animation:fa-spin 1s infinite steps(8);animation:fa-spin 1s infinite steps(8)}@-webkit-keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}100%{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}@keyframes 
fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}100%{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}.fa-rotate-90{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=1)";-webkit-transform:rotate(90deg);-ms-transform:rotate(90deg);transform:rotate(90deg)}.fa-rotate-180{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2)";-webkit-transform:rotate(180deg);-ms-transform:rotate(180deg);transform:rotate(180deg)}.fa-rotate-270{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=3)";-webkit-transform:rotate(270deg);-ms-transform:rotate(270deg);transform:rotate(270deg)}.fa-flip-horizontal{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1)";-webkit-transform:scale(-1, 1);-ms-transform:scale(-1, 1);transform:scale(-1, 1)}.fa-flip-vertical{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1)";-webkit-transform:scale(1, -1);-ms-transform:scale(1, -1);transform:scale(1, -1)}:root .fa-rotate-90,:root .fa-rotate-180,:root .fa-rotate-270,:root .fa-flip-horizontal,:root 
.fa-flip-vertical{filter:none}.fa-stack{position:relative;display:inline-block;width:2em;height:2em;line-height:2em;vertical-align:middle}.fa-stack-1x,.fa-stack-2x{position:absolute;left:0;width:100%;text-align:center}.fa-stack-1x{line-height:inherit}.fa-stack-2x{font-size:2em}.fa-inverse{color:#fff}.fa-glass:before{content:"\f000"}.fa-music:before{content:"\f001"}.fa-search:before{content:"\f002"}.fa-envelope-o:before{content:"\f003"}.fa-heart:before{content:"\f004"}.fa-star:before{content:"\f005"}.fa-star-o:before{content:"\f006"}.fa-user:before{content:"\f007"}.fa-film:before{content:"\f008"}.fa-th-large:before{content:"\f009"}.fa-th:before{content:"\f00a"}.fa-th-list:before{content:"\f00b"}.fa-check:before{content:"\f00c"}.fa-remove:before,.fa-close:before,.fa-times:before{content:"\f00d"}.fa-search-plus:before{content:"\f00e"}.fa-search-minus:before{content:"\f010"}.fa-power-off:before{content:"\f011"}.fa-signal:before{content:"\f012"}.fa-gear:before,.fa-cog:before{content:"\f013"}.fa-trash-o:before{content:"\f014"}.fa-home:before{content:"\f015"}.fa-file-o:before{content:"\f016"}.fa-clock-o:before{content:"\f017"}.fa-road:before{content:"\f018"}.fa-download:before{content:"\f019"}.fa-arrow-circle-o-down:before{content:"\f01a"}.fa-arrow-circle-o-up:before{content:"\f01b"}.fa-inbox:before{content:"\f01c"}.fa-play-circle-o:before{content:"\f01d"}.fa-rotate-right:before,.fa-repeat:before{content:"\f01e"}.fa-refresh:before{content:"\f021"}.fa-list-alt:before{content:"\f022"}.fa-lock:before{content:"\f023"}.fa-flag:before{content:"\f024"}.fa-headphones:before{content:"\f025"}.fa-volume-off:before{content:"\f026"}.fa-volume-down:before{content:"\f027"}.fa-volume-up:before{content:"\f028"}.fa-qrcode:before{content:"\f029"}.fa-barcode:before{content:"\f02a"}.fa-tag:before{content:"\f02b"}.fa-tags:before{content:"\f02c"}.fa-book:before{content:"\f02d"}.fa-bookmark:before{content:"\f02e"}.fa-print:before{content:"\f02f"}.fa-camera:before{content:"\f030"}.fa-font:before{c
ontent:"\f031"}.fa-bold:before{content:"\f032"}.fa-italic:before{content:"\f033"}.fa-text-height:before{content:"\f034"}.fa-text-width:before{content:"\f035"}.fa-align-left:before{content:"\f036"}.fa-align-center:before{content:"\f037"}.fa-align-right:before{content:"\f038"}.fa-align-justify:before{content:"\f039"}.fa-list:before{content:"\f03a"}.fa-dedent:before,.fa-outdent:before{content:"\f03b"}.fa-indent:before{content:"\f03c"}.fa-video-camera:before{content:"\f03d"}.fa-photo:before,.fa-image:before,.fa-picture-o:before{content:"\f03e"}.fa-pencil:before{content:"\f040"}.fa-map-marker:before{content:"\f041"}.fa-adjust:before{content:"\f042"}.fa-tint:before{content:"\f043"}.fa-edit:before,.fa-pencil-square-o:before{content:"\f044"}.fa-share-square-o:before{content:"\f045"}.fa-check-square-o:before{content:"\f046"}.fa-arrows:before{content:"\f047"}.fa-step-backward:before{content:"\f048"}.fa-fast-backward:before{content:"\f049"}.fa-backward:before{content:"\f04a"}.fa-play:before{content:"\f04b"}.fa-pause:before{content:"\f04c"}.fa-stop:before{content:"\f04d"}.fa-forward:before{content:"\f04e"}.fa-fast-forward:before{content:"\f050"}.fa-step-forward:before{content:"\f051"}.fa-eject:before{content:"\f052"}.fa-chevron-left:before{content:"\f053"}.fa-chevron-right:before{content:"\f054"}.fa-plus-circle:before{content:"\f055"}.fa-minus-circle:before{content:"\f056"}.fa-times-circle:before{content:"\f057"}.fa-check-circle:before{content:"\f058"}.fa-question-circle:before{content:"\f059"}.fa-info-circle:before{content:"\f05a"}.fa-crosshairs:before{content:"\f05b"}.fa-times-circle-o:before{content:"\f05c"}.fa-check-circle-o:before{content:"\f05d"}.fa-ban:before{content:"\f05e"}.fa-arrow-left:before{content:"\f060"}.fa-arrow-right:before{content:"\f061"}.fa-arrow-up:before{content:"\f062"}.fa-arrow-down:before{content:"\f063"}.fa-mail-forward:before,.fa-share:before{content:"\f064"}.fa-expand:before{content:"\f065"}.fa-compress:before{content:"\f066"}.fa-plus:before{content
:"\f067"}.fa-minus:before{content:"\f068"}.fa-asterisk:before{content:"\f069"}.fa-exclamation-circle:before{content:"\f06a"}.fa-gift:before{content:"\f06b"}.fa-leaf:before{content:"\f06c"}.fa-fire:before{content:"\f06d"}.fa-eye:before{content:"\f06e"}.fa-eye-slash:before{content:"\f070"}.fa-warning:before,.fa-exclamation-triangle:before{content:"\f071"}.fa-plane:before{content:"\f072"}.fa-calendar:before{content:"\f073"}.fa-random:before{content:"\f074"}.fa-comment:before{content:"\f075"}.fa-magnet:before{content:"\f076"}.fa-chevron-up:before{content:"\f077"}.fa-chevron-down:before{content:"\f078"}.fa-retweet:before{content:"\f079"}.fa-shopping-cart:before{content:"\f07a"}.fa-folder:before{content:"\f07b"}.fa-folder-open:before{content:"\f07c"}.fa-arrows-v:before{content:"\f07d"}.fa-arrows-h:before{content:"\f07e"}.fa-bar-chart-o:before,.fa-bar-chart:before{content:"\f080"}.fa-twitter-square:before{content:"\f081"}.fa-facebook-square:before{content:"\f082"}.fa-camera-retro:before{content:"\f083"}.fa-key:before{content:"\f084"}.fa-gears:before,.fa-cogs:before{content:"\f085"}.fa-comments:before{content:"\f086"}.fa-thumbs-o-up:before{content:"\f087"}.fa-thumbs-o-down:before{content:"\f088"}.fa-star-half:before{content:"\f089"}.fa-heart-o:before{content:"\f08a"}.fa-sign-out:before{content:"\f08b"}.fa-linkedin-square:before{content:"\f08c"}.fa-thumb-tack:before{content:"\f08d"}.fa-external-link:before{content:"\f08e"}.fa-sign-in:before{content:"\f090"}.fa-trophy:before{content:"\f091"}.fa-github-square:before{content:"\f092"}.fa-upload:before{content:"\f093"}.fa-lemon-o:before{content:"\f094"}.fa-phone:before{content:"\f095"}.fa-square-o:before{content:"\f096"}.fa-bookmark-o:before{content:"\f097"}.fa-phone-square:before{content:"\f098"}.fa-twitter:before{content:"\f099"}.fa-facebook-f:before,.fa-facebook:before{content:"\f09a"}.fa-github:before{content:"\f09b"}.fa-unlock:before{content:"\f09c"}.fa-credit-card:before{content:"\f09d"}.fa-feed:before,.fa-rss:before{conten
t:"\f09e"}.fa-hdd-o:before{content:"\f0a0"}.fa-bullhorn:before{content:"\f0a1"}.fa-bell:before{content:"\f0f3"}.fa-certificate:before{content:"\f0a3"}.fa-hand-o-right:before{content:"\f0a4"}.fa-hand-o-left:before{content:"\f0a5"}.fa-hand-o-up:before{content:"\f0a6"}.fa-hand-o-down:before{content:"\f0a7"}.fa-arrow-circle-left:before{content:"\f0a8"}.fa-arrow-circle-right:before{content:"\f0a9"}.fa-arrow-circle-up:before{content:"\f0aa"}.fa-arrow-circle-down:before{content:"\f0ab"}.fa-globe:before{content:"\f0ac"}.fa-wrench:before{content:"\f0ad"}.fa-tasks:before{content:"\f0ae"}.fa-filter:before{content:"\f0b0"}.fa-briefcase:before{content:"\f0b1"}.fa-arrows-alt:before{content:"\f0b2"}.fa-group:before,.fa-users:before{content:"\f0c0"}.fa-chain:before,.fa-link:before{content:"\f0c1"}.fa-cloud:before{content:"\f0c2"}.fa-flask:before{content:"\f0c3"}.fa-cut:before,.fa-scissors:before{content:"\f0c4"}.fa-copy:before,.fa-files-o:before{content:"\f0c5"}.fa-paperclip:before{content:"\f0c6"}.fa-save:before,.fa-floppy-o:before{content:"\f0c7"}.fa-square:before{content:"\f0c8"}.fa-navicon:before,.fa-reorder:before,.fa-bars:before{content:"\f0c9"}.fa-list-ul:before{content:"\f0ca"}.fa-list-ol:before{content:"\f0cb"}.fa-strikethrough:before{content:"\f0cc"}.fa-underline:before{content:"\f0cd"}.fa-table:before{content:"\f0ce"}.fa-magic:before{content:"\f0d0"}.fa-truck:before{content:"\f0d1"}.fa-pinterest:before{content:"\f0d2"}.fa-pinterest-square:before{content:"\f0d3"}.fa-google-plus-square:before{content:"\f0d4"}.fa-google-plus:before{content:"\f0d5"}.fa-money:before{content:"\f0d6"}.fa-caret-down:before{content:"\f0d7"}.fa-caret-up:before{content:"\f0d8"}.fa-caret-left:before{content:"\f0d9"}.fa-caret-right:before{content:"\f0da"}.fa-columns:before{content:"\f0db"}.fa-unsorted:before,.fa-sort:before{content:"\f0dc"}.fa-sort-down:before,.fa-sort-desc:before{content:"\f0dd"}.fa-sort-up:before,.fa-sort-asc:before{content:"\f0de"}.fa-envelope:before{content:"\f0e0"}.fa-linkedin:b
efore{content:"\f0e1"}.fa-rotate-left:before,.fa-undo:before{content:"\f0e2"}.fa-legal:before,.fa-gavel:before{content:"\f0e3"}.fa-dashboard:before,.fa-tachometer:before{content:"\f0e4"}.fa-comment-o:before{content:"\f0e5"}.fa-comments-o:before{content:"\f0e6"}.fa-flash:before,.fa-bolt:before{content:"\f0e7"}.fa-sitemap:before{content:"\f0e8"}.fa-umbrella:before{content:"\f0e9"}.fa-paste:before,.fa-clipboard:before{content:"\f0ea"}.fa-lightbulb-o:before{content:"\f0eb"}.fa-exchange:before{content:"\f0ec"}.fa-cloud-download:before{content:"\f0ed"}.fa-cloud-upload:before{content:"\f0ee"}.fa-user-md:before{content:"\f0f0"}.fa-stethoscope:before{content:"\f0f1"}.fa-suitcase:before{content:"\f0f2"}.fa-bell-o:before{content:"\f0a2"}.fa-coffee:before{content:"\f0f4"}.fa-cutlery:before{content:"\f0f5"}.fa-file-text-o:before{content:"\f0f6"}.fa-building-o:before{content:"\f0f7"}.fa-hospital-o:before{content:"\f0f8"}.fa-ambulance:before{content:"\f0f9"}.fa-medkit:before{content:"\f0fa"}.fa-fighter-jet:before{content:"\f0fb"}.fa-beer:before{content:"\f0fc"}.fa-h-square:before{content:"\f0fd"}.fa-plus-square:before{content:"\f0fe"}.fa-angle-double-left:before{content:"\f100"}.fa-angle-double-right:before{content:"\f101"}.fa-angle-double-up:before{content:"\f102"}.fa-angle-double-down:before{content:"\f103"}.fa-angle-left:before{content:"\f104"}.fa-angle-right:before{content:"\f105"}.fa-angle-up:before{content:"\f106"}.fa-angle-down:before{content:"\f107"}.fa-desktop:before{content:"\f108"}.fa-laptop:before{content:"\f109"}.fa-tablet:before{content:"\f10a"}.fa-mobile-phone:before,.fa-mobile:before{content:"\f10b"}.fa-circle-o:before{content:"\f10c"}.fa-quote-left:before{content:"\f10d"}.fa-quote-right:before{content:"\f10e"}.fa-spinner:before{content:"\f110"}.fa-circle:before{content:"\f111"}.fa-mail-reply:before,.fa-reply:before{content:"\f112"}.fa-github-alt:before{content:"\f113"}.fa-folder-o:before{content:"\f114"}.fa-folder-open-o:before{content:"\f115"}.fa-smile-o:before{c
ontent:"\f118"}.fa-frown-o:before{content:"\f119"}.fa-meh-o:before{content:"\f11a"}.fa-gamepad:before{content:"\f11b"}.fa-keyboard-o:before{content:"\f11c"}.fa-flag-o:before{content:"\f11d"}.fa-flag-checkered:before{content:"\f11e"}.fa-terminal:before{content:"\f120"}.fa-code:before{content:"\f121"}.fa-mail-reply-all:before,.fa-reply-all:before{content:"\f122"}.fa-star-half-empty:before,.fa-star-half-full:before,.fa-star-half-o:before{content:"\f123"}.fa-location-arrow:before{content:"\f124"}.fa-crop:before{content:"\f125"}.fa-code-fork:before{content:"\f126"}.fa-unlink:before,.fa-chain-broken:before{content:"\f127"}.fa-question:before{content:"\f128"}.fa-info:before{content:"\f129"}.fa-exclamation:before{content:"\f12a"}.fa-superscript:before{content:"\f12b"}.fa-subscript:before{content:"\f12c"}.fa-eraser:before{content:"\f12d"}.fa-puzzle-piece:before{content:"\f12e"}.fa-microphone:before{content:"\f130"}.fa-microphone-slash:before{content:"\f131"}.fa-shield:before{content:"\f132"}.fa-calendar-o:before{content:"\f133"}.fa-fire-extinguisher:before{content:"\f134"}.fa-rocket:before{content:"\f135"}.fa-maxcdn:before{content:"\f136"}.fa-chevron-circle-left:before{content:"\f137"}.fa-chevron-circle-right:before{content:"\f138"}.fa-chevron-circle-up:before{content:"\f139"}.fa-chevron-circle-down:before{content:"\f13a"}.fa-html5:before{content:"\f13b"}.fa-css3:before{content:"\f13c"}.fa-anchor:before{content:"\f13d"}.fa-unlock-alt:before{content:"\f13e"}.fa-bullseye:before{content:"\f140"}.fa-ellipsis-h:before{content:"\f141"}.fa-ellipsis-v:before{content:"\f142"}.fa-rss-square:before{content:"\f143"}.fa-play-circle:before{content:"\f144"}.fa-ticket:before{content:"\f145"}.fa-minus-square:before{content:"\f146"}.fa-minus-square-o:before{content:"\f147"}.fa-level-up:before{content:"\f148"}.fa-level-down:before{content:"\f149"}.fa-check-square:before{content:"\f14a"}.fa-pencil-square:before{content:"\f14b"}.fa-external-link-square:before{content:"\f14c"}.fa-share-square:bef
ore{content:"\f14d"}.fa-compass:before{content:"\f14e"}.fa-toggle-down:before,.fa-caret-square-o-down:before{content:"\f150"}.fa-toggle-up:before,.fa-caret-square-o-up:before{content:"\f151"}.fa-toggle-right:before,.fa-caret-square-o-right:before{content:"\f152"}.fa-euro:before,.fa-eur:before{content:"\f153"}.fa-gbp:before{content:"\f154"}.fa-dollar:before,.fa-usd:before{content:"\f155"}.fa-rupee:before,.fa-inr:before{content:"\f156"}.fa-cny:before,.fa-rmb:before,.fa-yen:before,.fa-jpy:before{content:"\f157"}.fa-ruble:before,.fa-rouble:before,.fa-rub:before{content:"\f158"}.fa-won:before,.fa-krw:before{content:"\f159"}.fa-bitcoin:before,.fa-btc:before{content:"\f15a"}.fa-file:before{content:"\f15b"}.fa-file-text:before{content:"\f15c"}.fa-sort-alpha-asc:before{content:"\f15d"}.fa-sort-alpha-desc:before{content:"\f15e"}.fa-sort-amount-asc:before{content:"\f160"}.fa-sort-amount-desc:before{content:"\f161"}.fa-sort-numeric-asc:before{content:"\f162"}.fa-sort-numeric-desc:before{content:"\f163"}.fa-thumbs-up:before{content:"\f164"}.fa-thumbs-down:before{content:"\f165"}.fa-youtube-square:before{content:"\f166"}.fa-youtube:before{content:"\f167"}.fa-xing:before{content:"\f168"}.fa-xing-square:before{content:"\f169"}.fa-youtube-play:before{content:"\f16a"}.fa-dropbox:before{content:"\f16b"}.fa-stack-overflow:before{content:"\f16c"}.fa-instagram:before{content:"\f16d"}.fa-flickr:before{content:"\f16e"}.fa-adn:before{content:"\f170"}.fa-bitbucket:before{content:"\f171"}.fa-bitbucket-square:before{content:"\f172"}.fa-tumblr:before{content:"\f173"}.fa-tumblr-square:before{content:"\f174"}.fa-long-arrow-down:before{content:"\f175"}.fa-long-arrow-up:before{content:"\f176"}.fa-long-arrow-left:before{content:"\f177"}.fa-long-arrow-right:before{content:"\f178"}.fa-apple:before{content:"\f179"}.fa-windows:before{content:"\f17a"}.fa-android:before{content:"\f17b"}.fa-linux:before{content:"\f17c"}.fa-dribbble:before{content:"\f17d"}.fa-skype:before{content:"\f17e"}.fa-foursquare:befo
re{content:"\f180"}.fa-trello:before{content:"\f181"}.fa-female:before{content:"\f182"}.fa-male:before{content:"\f183"}.fa-gittip:before,.fa-gratipay:before{content:"\f184"}.fa-sun-o:before{content:"\f185"}.fa-moon-o:before{content:"\f186"}.fa-archive:before{content:"\f187"}.fa-bug:before{content:"\f188"}.fa-vk:before{content:"\f189"}.fa-weibo:before{content:"\f18a"}.fa-renren:before{content:"\f18b"}.fa-pagelines:before{content:"\f18c"}.fa-stack-exchange:before{content:"\f18d"}.fa-arrow-circle-o-right:before{content:"\f18e"}.fa-arrow-circle-o-left:before{content:"\f190"}.fa-toggle-left:before,.fa-caret-square-o-left:before{content:"\f191"}.fa-dot-circle-o:before{content:"\f192"}.fa-wheelchair:before{content:"\f193"}.fa-vimeo-square:before{content:"\f194"}.fa-turkish-lira:before,.fa-try:before{content:"\f195"}.fa-plus-square-o:before{content:"\f196"}.fa-space-shuttle:before{content:"\f197"}.fa-slack:before{content:"\f198"}.fa-envelope-square:before{content:"\f199"}.fa-wordpress:before{content:"\f19a"}.fa-openid:before{content:"\f19b"}.fa-institution:before,.fa-bank:before,.fa-university:before{content:"\f19c"}.fa-mortar-board:before,.fa-graduation-cap:before{content:"\f19d"}.fa-yahoo:before{content:"\f19e"}.fa-google:before{content:"\f1a0"}.fa-reddit:before{content:"\f1a1"}.fa-reddit-square:before{content:"\f1a2"}.fa-stumbleupon-circle:before{content:"\f1a3"}.fa-stumbleupon:before{content:"\f1a4"}.fa-delicious:before{content:"\f1a5"}.fa-digg:before{content:"\f1a6"}.fa-pied-piper-pp:before{content:"\f1a7"}.fa-pied-piper-alt:before{content:"\f1a8"}.fa-drupal:before{content:"\f1a9"}.fa-joomla:before{content:"\f1aa"}.fa-language:before{content:"\f1ab"}.fa-fax:before{content:"\f1ac"}.fa-building:before{content:"\f1ad"}.fa-child:before{content:"\f1ae"}.fa-paw:before{content:"\f1b0"}.fa-spoon:before{content:"\f1b1"}.fa-cube:before{content:"\f1b2"}.fa-cubes:before{content:"\f1b3"}.fa-behance:before{content:"\f1b4"}.fa-behance-square:before{content:"\f1b5"}.fa-steam:before{co
ntent:"\f1b6"}.fa-steam-square:before{content:"\f1b7"}.fa-recycle:before{content:"\f1b8"}.fa-automobile:before,.fa-car:before{content:"\f1b9"}.fa-cab:before,.fa-taxi:before{content:"\f1ba"}.fa-tree:before{content:"\f1bb"}.fa-spotify:before{content:"\f1bc"}.fa-deviantart:before{content:"\f1bd"}.fa-soundcloud:before{content:"\f1be"}.fa-database:before{content:"\f1c0"}.fa-file-pdf-o:before{content:"\f1c1"}.fa-file-word-o:before{content:"\f1c2"}.fa-file-excel-o:before{content:"\f1c3"}.fa-file-powerpoint-o:before{content:"\f1c4"}.fa-file-photo-o:before,.fa-file-picture-o:before,.fa-file-image-o:before{content:"\f1c5"}.fa-file-zip-o:before,.fa-file-archive-o:before{content:"\f1c6"}.fa-file-sound-o:before,.fa-file-audio-o:before{content:"\f1c7"}.fa-file-movie-o:before,.fa-file-video-o:before{content:"\f1c8"}.fa-file-code-o:before{content:"\f1c9"}.fa-vine:before{content:"\f1ca"}.fa-codepen:before{content:"\f1cb"}.fa-jsfiddle:before{content:"\f1cc"}.fa-life-bouy:before,.fa-life-buoy:before,.fa-life-saver:before,.fa-support:before,.fa-life-ring:before{content:"\f1cd"}.fa-circle-o-notch:before{content:"\f1ce"}.fa-ra:before,.fa-resistance:before,.fa-rebel:before{content:"\f1d0"}.fa-ge:before,.fa-empire:before{content:"\f1d1"}.fa-git-square:before{content:"\f1d2"}.fa-git:before{content:"\f1d3"}.fa-y-combinator-square:before,.fa-yc-square:before,.fa-hacker-news:before{content:"\f1d4"}.fa-tencent-weibo:before{content:"\f1d5"}.fa-qq:before{content:"\f1d6"}.fa-wechat:before,.fa-weixin:before{content:"\f1d7"}.fa-send:before,.fa-paper-plane:before{content:"\f1d8"}.fa-send-o:before,.fa-paper-plane-o:before{content:"\f1d9"}.fa-history:before{content:"\f1da"}.fa-circle-thin:before{content:"\f1db"}.fa-header:before{content:"\f1dc"}.fa-paragraph:before{content:"\f1dd"}.fa-sliders:before{content:"\f1de"}.fa-share-alt:before{content:"\f1e0"}.fa-share-alt-square:before{content:"\f1e1"}.fa-bomb:before{content:"\f1e2"}.fa-soccer-ball-o:before,.fa-futbol-o:before{content:"\f1e3"}.fa-tty:before{c
ontent:"\f1e4"}.fa-binoculars:before{content:"\f1e5"}.fa-plug:before{content:"\f1e6"}.fa-slideshare:before{content:"\f1e7"}.fa-twitch:before{content:"\f1e8"}.fa-yelp:before{content:"\f1e9"}.fa-newspaper-o:before{content:"\f1ea"}.fa-wifi:before{content:"\f1eb"}.fa-calculator:before{content:"\f1ec"}.fa-paypal:before{content:"\f1ed"}.fa-google-wallet:before{content:"\f1ee"}.fa-cc-visa:before{content:"\f1f0"}.fa-cc-mastercard:before{content:"\f1f1"}.fa-cc-discover:before{content:"\f1f2"}.fa-cc-amex:before{content:"\f1f3"}.fa-cc-paypal:before{content:"\f1f4"}.fa-cc-stripe:before{content:"\f1f5"}.fa-bell-slash:before{content:"\f1f6"}.fa-bell-slash-o:before{content:"\f1f7"}.fa-trash:before{content:"\f1f8"}.fa-copyright:before{content:"\f1f9"}.fa-at:before{content:"\f1fa"}.fa-eyedropper:before{content:"\f1fb"}.fa-paint-brush:before{content:"\f1fc"}.fa-birthday-cake:before{content:"\f1fd"}.fa-area-chart:before{content:"\f1fe"}.fa-pie-chart:before{content:"\f200"}.fa-line-chart:before{content:"\f201"}.fa-lastfm:before{content:"\f202"}.fa-lastfm-square:before{content:"\f203"}.fa-toggle-off:before{content:"\f204"}.fa-toggle-on:before{content:"\f205"}.fa-bicycle:before{content:"\f206"}.fa-bus:before{content:"\f207"}.fa-ioxhost:before{content:"\f208"}.fa-angellist:before{content:"\f209"}.fa-cc:before{content:"\f20a"}.fa-shekel:before,.fa-sheqel:before,.fa-ils:before{content:"\f20b"}.fa-meanpath:before{content:"\f20c"}.fa-buysellads:before{content:"\f20d"}.fa-connectdevelop:before{content:"\f20e"}.fa-dashcube:before{content:"\f210"}.fa-forumbee:before{content:"\f211"}.fa-leanpub:before{content:"\f212"}.fa-sellsy:before{content:"\f213"}.fa-shirtsinbulk:before{content:"\f214"}.fa-simplybuilt:before{content:"\f215"}.fa-skyatlas:before{content:"\f216"}.fa-cart-plus:before{content:"\f217"}.fa-cart-arrow-down:before{content:"\f218"}.fa-diamond:before{content:"\f219"}.fa-ship:before{content:"\f21a"}.fa-user-secret:before{content:"\f21b"}.fa-motorcycle:before{content:"\f21c"}.fa-street-vi
ew:before{content:"\f21d"}.fa-heartbeat:before{content:"\f21e"}.fa-venus:before{content:"\f221"}.fa-mars:before{content:"\f222"}.fa-mercury:before{content:"\f223"}.fa-intersex:before,.fa-transgender:before{content:"\f224"}.fa-transgender-alt:before{content:"\f225"}.fa-venus-double:before{content:"\f226"}.fa-mars-double:before{content:"\f227"}.fa-venus-mars:before{content:"\f228"}.fa-mars-stroke:before{content:"\f229"}.fa-mars-stroke-v:before{content:"\f22a"}.fa-mars-stroke-h:before{content:"\f22b"}.fa-neuter:before{content:"\f22c"}.fa-genderless:before{content:"\f22d"}.fa-facebook-official:before{content:"\f230"}.fa-pinterest-p:before{content:"\f231"}.fa-whatsapp:before{content:"\f232"}.fa-server:before{content:"\f233"}.fa-user-plus:before{content:"\f234"}.fa-user-times:before{content:"\f235"}.fa-hotel:before,.fa-bed:before{content:"\f236"}.fa-viacoin:before{content:"\f237"}.fa-train:before{content:"\f238"}.fa-subway:before{content:"\f239"}.fa-medium:before{content:"\f23a"}.fa-yc:before,.fa-y-combinator:before{content:"\f23b"}.fa-optin-monster:before{content:"\f23c"}.fa-opencart:before{content:"\f23d"}.fa-expeditedssl:before{content:"\f23e"}.fa-battery-4:before,.fa-battery:before,.fa-battery-full:before{content:"\f240"}.fa-battery-3:before,.fa-battery-three-quarters:before{content:"\f241"}.fa-battery-2:before,.fa-battery-half:before{content:"\f242"}.fa-battery-1:before,.fa-battery-quarter:before{content:"\f243"}.fa-battery-0:before,.fa-battery-empty:before{content:"\f244"}.fa-mouse-pointer:before{content:"\f245"}.fa-i-cursor:before{content:"\f246"}.fa-object-group:before{content:"\f247"}.fa-object-ungroup:before{content:"\f248"}.fa-sticky-note:before{content:"\f249"}.fa-sticky-note-o:before{content:"\f24a"}.fa-cc-jcb:before{content:"\f24b"}.fa-cc-diners-club:before{content:"\f24c"}.fa-clone:before{content:"\f24d"}.fa-balance-scale:before{content:"\f24e"}.fa-hourglass-o:before{content:"\f250"}.fa-hourglass-1:before,.fa-hourglass-start:before{content:"\f251"}.fa-hourg
lass-2:before,.fa-hourglass-half:before{content:"\f252"}.fa-hourglass-3:before,.fa-hourglass-end:before{content:"\f253"}.fa-hourglass:before{content:"\f254"}.fa-hand-grab-o:before,.fa-hand-rock-o:before{content:"\f255"}.fa-hand-stop-o:before,.fa-hand-paper-o:before{content:"\f256"}.fa-hand-scissors-o:before{content:"\f257"}.fa-hand-lizard-o:before{content:"\f258"}.fa-hand-spock-o:before{content:"\f259"}.fa-hand-pointer-o:before{content:"\f25a"}.fa-hand-peace-o:before{content:"\f25b"}.fa-trademark:before{content:"\f25c"}.fa-registered:before{content:"\f25d"}.fa-creative-commons:before{content:"\f25e"}.fa-gg:before{content:"\f260"}.fa-gg-circle:before{content:"\f261"}.fa-tripadvisor:before{content:"\f262"}.fa-odnoklassniki:before{content:"\f263"}.fa-odnoklassniki-square:before{content:"\f264"}.fa-get-pocket:before{content:"\f265"}.fa-wikipedia-w:before{content:"\f266"}.fa-safari:before{content:"\f267"}.fa-chrome:before{content:"\f268"}.fa-firefox:before{content:"\f269"}.fa-opera:before{content:"\f26a"}.fa-internet-explorer:before{content:"\f26b"}.fa-tv:before,.fa-television:before{content:"\f26c"}.fa-contao:before{content:"\f26d"}.fa-500px:before{content:"\f26e"}.fa-amazon:before{content:"\f270"}.fa-calendar-plus-o:before{content:"\f271"}.fa-calendar-minus-o:before{content:"\f272"}.fa-calendar-times-o:before{content:"\f273"}.fa-calendar-check-o:before{content:"\f274"}.fa-industry:before{content:"\f275"}.fa-map-pin:before{content:"\f276"}.fa-map-signs:before{content:"\f277"}.fa-map-o:before{content:"\f278"}.fa-map:before{content:"\f279"}.fa-commenting:before{content:"\f27a"}.fa-commenting-o:before{content:"\f27b"}.fa-houzz:before{content:"\f27c"}.fa-vimeo:before{content:"\f27d"}.fa-black-tie:before{content:"\f27e"}.fa-fonticons:before{content:"\f280"}.fa-reddit-alien:before{content:"\f281"}.fa-edge:before{content:"\f282"}.fa-credit-card-alt:before{content:"\f283"}.fa-codiepie:before{content:"\f284"}.fa-modx:before{content:"\f285"}.fa-fort-awesome:before{content:"\f286"
}.fa-usb:before{content:"\f287"}.fa-product-hunt:before{content:"\f288"}.fa-mixcloud:before{content:"\f289"}.fa-scribd:before{content:"\f28a"}.fa-pause-circle:before{content:"\f28b"}.fa-pause-circle-o:before{content:"\f28c"}.fa-stop-circle:before{content:"\f28d"}.fa-stop-circle-o:before{content:"\f28e"}.fa-shopping-bag:before{content:"\f290"}.fa-shopping-basket:before{content:"\f291"}.fa-hashtag:before{content:"\f292"}.fa-bluetooth:before{content:"\f293"}.fa-bluetooth-b:before{content:"\f294"}.fa-percent:before{content:"\f295"}.fa-gitlab:before{content:"\f296"}.fa-wpbeginner:before{content:"\f297"}.fa-wpforms:before{content:"\f298"}.fa-envira:before{content:"\f299"}.fa-universal-access:before{content:"\f29a"}.fa-wheelchair-alt:before{content:"\f29b"}.fa-question-circle-o:before{content:"\f29c"}.fa-blind:before{content:"\f29d"}.fa-audio-description:before{content:"\f29e"}.fa-volume-control-phone:before{content:"\f2a0"}.fa-braille:before{content:"\f2a1"}.fa-assistive-listening-systems:before{content:"\f2a2"}.fa-asl-interpreting:before,.fa-american-sign-language-interpreting:before{content:"\f2a3"}.fa-deafness:before,.fa-hard-of-hearing:before,.fa-deaf:before{content:"\f2a4"}.fa-glide:before{content:"\f2a5"}.fa-glide-g:before{content:"\f2a6"}.fa-signing:before,.fa-sign-language:before{content:"\f2a7"}.fa-low-vision:before{content:"\f2a8"}.fa-viadeo:before{content:"\f2a9"}.fa-viadeo-square:before{content:"\f2aa"}.fa-snapchat:before{content:"\f2ab"}.fa-snapchat-ghost:before{content:"\f2ac"}.fa-snapchat-square:before{content:"\f2ad"}.fa-pied-piper:before{content:"\f2ae"}.fa-first-order:before{content:"\f2b0"}.fa-yoast:before{content:"\f2b1"}.fa-themeisle:before{content:"\f2b2"}.fa-google-plus-circle:before,.fa-google-plus-official:before{content:"\f2b3"}.fa-fa:before,.fa-font-awesome:before{content:"\f2b4"}.fa-handshake-o:before{content:"\f2b5"}.fa-envelope-open:before{content:"\f2b6"}.fa-envelope-open-o:before{content:"\f2b7"}.fa-linode:before{content:"\f2b8"}.fa-address
-book:before{content:"\f2b9"}.fa-address-book-o:before{content:"\f2ba"}.fa-vcard:before,.fa-address-card:before{content:"\f2bb"}.fa-vcard-o:before,.fa-address-card-o:before{content:"\f2bc"}.fa-user-circle:before{content:"\f2bd"}.fa-user-circle-o:before{content:"\f2be"}.fa-user-o:before{content:"\f2c0"}.fa-id-badge:before{content:"\f2c1"}.fa-drivers-license:before,.fa-id-card:before{content:"\f2c2"}.fa-drivers-license-o:before,.fa-id-card-o:before{content:"\f2c3"}.fa-quora:before{content:"\f2c4"}.fa-free-code-camp:before{content:"\f2c5"}.fa-telegram:before{content:"\f2c6"}.fa-thermometer-4:before,.fa-thermometer:before,.fa-thermometer-full:before{content:"\f2c7"}.fa-thermometer-3:before,.fa-thermometer-three-quarters:before{content:"\f2c8"}.fa-thermometer-2:before,.fa-thermometer-half:before{content:"\f2c9"}.fa-thermometer-1:before,.fa-thermometer-quarter:before{content:"\f2ca"}.fa-thermometer-0:before,.fa-thermometer-empty:before{content:"\f2cb"}.fa-shower:before{content:"\f2cc"}.fa-bathtub:before,.fa-s15:before,.fa-bath:before{content:"\f2cd"}.fa-podcast:before{content:"\f2ce"}.fa-window-maximize:before{content:"\f2d0"}.fa-window-minimize:before{content:"\f2d1"}.fa-window-restore:before{content:"\f2d2"}.fa-times-rectangle:before,.fa-window-close:before{content:"\f2d3"}.fa-times-rectangle-o:before,.fa-window-close-o:before{content:"\f2d4"}.fa-bandcamp:before{content:"\f2d5"}.fa-grav:before{content:"\f2d6"}.fa-etsy:before{content:"\f2d7"}.fa-imdb:before{content:"\f2d8"}.fa-ravelry:before{content:"\f2d9"}.fa-eercast:before{content:"\f2da"}.fa-microchip:before{content:"\f2db"}.fa-snowflake-o:before{content:"\f2dc"}.fa-superpowers:before{content:"\f2dd"}.fa-wpexplorer:before{content:"\f2de"}.fa-meetup:before{content:"\f2e0"}.sr-only{position:absolute;width:1px;height:1px;padding:0;margin:-1px;overflow:hidden;clip:rect(0, 0, 0, 0);border:0}.sr-only-focusable:active,.sr-only-focusable:focus{position:static;width:auto;height:auto;margin:0;overflow:visible;clip:auto}
diff --git a/plugins/fontawesome/fonts/FontAwesome.otf b/plugins/fontawesome/fonts/FontAwesome.otf
new file mode 100644
index 00000000..401ec0f3
Binary files /dev/null and b/plugins/fontawesome/fonts/FontAwesome.otf differ
diff --git a/plugins/fontawesome/fonts/fontawesome-webfont.eot b/plugins/fontawesome/fonts/fontawesome-webfont.eot
new file mode 100644
index 00000000..e9f60ca9
Binary files /dev/null and b/plugins/fontawesome/fonts/fontawesome-webfont.eot differ
diff --git a/plugins/fontawesome/fonts/fontawesome-webfont.svg b/plugins/fontawesome/fonts/fontawesome-webfont.svg
new file mode 100644
index 00000000..855c845e
--- /dev/null
+++ b/plugins/fontawesome/fonts/fontawesome-webfont.svg
@@ -0,0 +1,2671 @@
+
+
+
diff --git a/plugins/fontawesome/fonts/fontawesome-webfont.ttf b/plugins/fontawesome/fonts/fontawesome-webfont.ttf
new file mode 100644
index 00000000..35acda2f
Binary files /dev/null and b/plugins/fontawesome/fonts/fontawesome-webfont.ttf differ
diff --git a/plugins/fontawesome/fonts/fontawesome-webfont.woff b/plugins/fontawesome/fonts/fontawesome-webfont.woff
new file mode 100644
index 00000000..400014a4
Binary files /dev/null and b/plugins/fontawesome/fonts/fontawesome-webfont.woff differ
diff --git a/plugins/fontawesome/fonts/fontawesome-webfont.woff2 b/plugins/fontawesome/fonts/fontawesome-webfont.woff2
new file mode 100644
index 00000000..4d13fc60
Binary files /dev/null and b/plugins/fontawesome/fonts/fontawesome-webfont.woff2 differ
diff --git a/plugins/google-map/gmap.js b/plugins/google-map/gmap.js
new file mode 100644
index 00000000..d7c9e98e
--- /dev/null
+++ b/plugins/google-map/gmap.js
@@ -0,0 +1,77 @@
+window.marker = null;
+
+// Initialise the Google Map from the data-* attributes on #map_canvas.
+function initialize() {
+    var map;
+    var $canvas = $('#map_canvas');
+    var latitude = $canvas.attr('data-latitude');
+    var longitude = $canvas.attr('data-longitude');
+    var mapMarker = $canvas.attr('data-marker');
+    var mapMarkerName = $canvas.attr('data-marker-name');
+    var mapCenter = new google.maps.LatLng(latitude, longitude);
+    // Custom "Grayscale" style: simplified light roads, pale blue water,
+    // soft green points of interest.
+    var style = [{
+        "featureType": "road",
+        "elementType": "geometry",
+        "stylers": [{
+            "lightness": 100
+        }, {
+            "visibility": "simplified"
+        }]
+    }, {
+        "featureType": "water",
+        "elementType": "geometry",
+        "stylers": [{
+            "visibility": "on"
+        }, {
+            "color": "#C6E2FF"
+        }]
+    }, {
+        "featureType": "poi",
+        "elementType": "geometry.fill",
+        "stylers": [{
+            "color": "#C5E3BF"
+        }]
+    }, {
+        "featureType": "road",
+        "elementType": "geometry.fill",
+        "stylers": [{
+            "color": "#D1D1B8"
+        }]
+    }];
+    var mapOptions = {
+        center: mapCenter,
+        mapTypeId: google.maps.MapTypeId.ROADMAP,
+        backgroundColor: "#000",
+        zoom: 15,
+        panControl: false,
+        zoomControl: true,
+        mapTypeControl: false,
+        scaleControl: false,
+        streetViewControl: false,
+        overviewMapControl: false,
+        zoomControlOptions: {
+            style: google.maps.ZoomControlStyle.LARGE
+        }
+    };
+    map = new google.maps.Map(document.getElementById('map_canvas'), mapOptions);
+    var mapType = new google.maps.StyledMapType(style, {
+        name: "Grayscale"
+    });
+    map.mapTypes.set('grey', mapType);
+    map.setMapTypeId('grey');
+    // Display the custom marker image at 46x58 pixels.
+    var pinIcon = new google.maps.MarkerImage(mapMarker, null, null, null, new google.maps.Size(46, 58));
+    marker = new google.maps.Marker({
+        position: mapCenter,
+        map: map,
+        icon: pinIcon,
+        title: mapMarkerName
+    });
+}
+
+// Only wire up the map if the page actually contains #map_canvas.
+if (document.getElementById('map_canvas') !== null) {
+    google.maps.event.addDomListener(window, 'load', initialize);
+}
diff --git a/plugins/jQuery/jquery.min.js b/plugins/jQuery/jquery.min.js
new file mode 100644
index 00000000..a967467b
--- /dev/null
+++ b/plugins/jQuery/jquery.min.js
@@ -0,0 +1,2 @@
+/*! jQuery v3.3.1 | (c) JS Foundation and other contributors | jquery.org/license */
+!function (e, t) { "use strict"; "object" == typeof module && "object" == typeof module.exports ? module.exports = e.document ? t(e, !0) : function (e) { if (!e.document) throw new Error("jQuery requires a window with a document"); return t(e) } : t(e) }("undefined" != typeof window ? window : this, function (e, t) { "use strict"; var n = [], r = e.document, i = Object.getPrototypeOf, o = n.slice, a = n.concat, s = n.push, u = n.indexOf, l = {}, c = l.toString, f = l.hasOwnProperty, p = f.toString, d = p.call(Object), h = {}, g = function e(t) { return "function" == typeof t && "number" != typeof t.nodeType }, y = function e(t) { return null != t && t === t.window }, v = { type: !0, src: !0, noModule: !0 }; function m(e, t, n) { var i, o = (t = t || r).createElement("script"); if (o.text = e, n) for (i in v) n[i] && (o[i] = n[i]); t.head.appendChild(o).parentNode.removeChild(o) } function x(e) { return null == e ? e + "" : "object" == typeof e || "function" == typeof e ? l[c.call(e)] || "object" : typeof e } var b = "3.3.1", w = function (e, t) { return new w.fn.init(e, t) }, T = /^[\s\uFEFF\xA0]+|[\s\uFEFF\xA0]+$/g; w.fn = w.prototype = { jquery: "3.3.1", constructor: w, length: 0, toArray: function () { return o.call(this) }, get: function (e) { return null == e ? o.call(this) : e < 0 ? this[e + this.length] : this[e] }, pushStack: function (e) { var t = w.merge(this.constructor(), e); return t.prevObject = this, t }, each: function (e) { return w.each(this, e) }, map: function (e) { return this.pushStack(w.map(this, function (t, n) { return e.call(t, n, t) })) }, slice: function () { return this.pushStack(o.apply(this, arguments)) }, first: function () { return this.eq(0) }, last: function () { return this.eq(-1) }, eq: function (e) { var t = this.length, n = +e + (e < 0 ? t : 0); return this.pushStack(n >= 0 && n < t ? 
[this[n]] : []) }, end: function () { return this.prevObject || this.constructor() }, push: s, sort: n.sort, splice: n.splice }, w.extend = w.fn.extend = function () { var e, t, n, r, i, o, a = arguments[0] || {}, s = 1, u = arguments.length, l = !1; for ("boolean" == typeof a && (l = a, a = arguments[s] || {}, s++), "object" == typeof a || g(a) || (a = {}), s === u && (a = this, s--); s < u; s++)if (null != (e = arguments[s])) for (t in e) n = a[t], a !== (r = e[t]) && (l && r && (w.isPlainObject(r) || (i = Array.isArray(r))) ? (i ? (i = !1, o = n && Array.isArray(n) ? n : []) : o = n && w.isPlainObject(n) ? n : {}, a[t] = w.extend(l, o, r)) : void 0 !== r && (a[t] = r)); return a }, w.extend({ expando: "jQuery" + ("3.3.1" + Math.random()).replace(/\D/g, ""), isReady: !0, error: function (e) { throw new Error(e) }, noop: function () { }, isPlainObject: function (e) { var t, n; return !(!e || "[object Object]" !== c.call(e)) && (!(t = i(e)) || "function" == typeof (n = f.call(t, "constructor") && t.constructor) && p.call(n) === d) }, isEmptyObject: function (e) { var t; for (t in e) return !1; return !0 }, globalEval: function (e) { m(e) }, each: function (e, t) { var n, r = 0; if (C(e)) { for (n = e.length; r < n; r++)if (!1 === t.call(e[r], r, e[r])) break } else for (r in e) if (!1 === t.call(e[r], r, e[r])) break; return e }, trim: function (e) { return null == e ? "" : (e + "").replace(T, "") }, makeArray: function (e, t) { var n = t || []; return null != e && (C(Object(e)) ? w.merge(n, "string" == typeof e ? [e] : e) : s.call(n, e)), n }, inArray: function (e, t, n) { return null == t ? 
-1 : u.call(t, e, n) }, merge: function (e, t) { for (var n = +t.length, r = 0, i = e.length; r < n; r++)e[i++] = t[r]; return e.length = i, e }, grep: function (e, t, n) { for (var r, i = [], o = 0, a = e.length, s = !n; o < a; o++)(r = !t(e[o], o)) !== s && i.push(e[o]); return i }, map: function (e, t, n) { var r, i, o = 0, s = []; if (C(e)) for (r = e.length; o < r; o++)null != (i = t(e[o], o, n)) && s.push(i); else for (o in e) null != (i = t(e[o], o, n)) && s.push(i); return a.apply([], s) }, guid: 1, support: h }), "function" == typeof Symbol && (w.fn[Symbol.iterator] = n[Symbol.iterator]), w.each("Boolean Number String Function Array Date RegExp Object Error Symbol".split(" "), function (e, t) { l["[object " + t + "]"] = t.toLowerCase() }); function C(e) { var t = !!e && "length" in e && e.length, n = x(e); return !g(e) && !y(e) && ("array" === n || 0 === t || "number" == typeof t && t > 0 && t - 1 in e) } var E = function (e) { var t, n, r, i, o, a, s, u, l, c, f, p, d, h, g, y, v, m, x, b = "sizzle" + 1 * new Date, w = e.document, T = 0, C = 0, E = ae(), k = ae(), S = ae(), D = function (e, t) { return e === t && (f = !0), 0 }, N = {}.hasOwnProperty, A = [], j = A.pop, q = A.push, L = A.push, H = A.slice, O = function (e, t) { for (var n = 0, r = e.length; n < r; n++)if (e[n] === t) return n; return -1 }, P = "checked|selected|async|autofocus|autoplay|controls|defer|disabled|hidden|ismap|loop|multiple|open|readonly|required|scoped", M = "[\\x20\\t\\r\\n\\f]", R = "(?:\\\\.|[\\w-]|[^\0-\\xa0])+", I = "\\[" + M + "*(" + R + ")(?:" + M + "*([*^$|!~]?=)" + M + "*(?:'((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\"|(" + R + "))|)" + M + "*\\]", W = ":(" + R + ")(?:\\((('((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\")|((?:\\\\.|[^\\\\()[\\]]|" + I + ")*)|.*)\\)|)", $ = new RegExp(M + "+", "g"), B = new RegExp("^" + M + "+|((?:^|[^\\\\])(?:\\\\.)*)" + M + "+$", "g"), F = new RegExp("^" + M + "*," + M + "*"), _ = new RegExp("^" + M + "*([>+~]|" + M + ")" + M 
+ "*"), z = new RegExp("=" + M + "*([^\\]'\"]*?)" + M + "*\\]", "g"), X = new RegExp(W), U = new RegExp("^" + R + "$"), V = { ID: new RegExp("^#(" + R + ")"), CLASS: new RegExp("^\\.(" + R + ")"), TAG: new RegExp("^(" + R + "|[*])"), ATTR: new RegExp("^" + I), PSEUDO: new RegExp("^" + W), CHILD: new RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\(" + M + "*(even|odd|(([+-]|)(\\d*)n|)" + M + "*(?:([+-]|)" + M + "*(\\d+)|))" + M + "*\\)|)", "i"), bool: new RegExp("^(?:" + P + ")$", "i"), needsContext: new RegExp("^" + M + "*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\(" + M + "*((?:-\\d)?\\d*)" + M + "*\\)|)(?=[^-]|$)", "i") }, G = /^(?:input|select|textarea|button)$/i, Y = /^h\d$/i, Q = /^[^{]+\{\s*\[native \w/, J = /^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/, K = /[+~]/, Z = new RegExp("\\\\([\\da-f]{1,6}" + M + "?|(" + M + ")|.)", "ig"), ee = function (e, t, n) { var r = "0x" + t - 65536; return r !== r || n ? t : r < 0 ? String.fromCharCode(r + 65536) : String.fromCharCode(r >> 10 | 55296, 1023 & r | 56320) }, te = /([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g, ne = function (e, t) { return t ? "\0" === e ? "\ufffd" : e.slice(0, -1) + "\\" + e.charCodeAt(e.length - 1).toString(16) + " " : "\\" + e }, re = function () { p() }, ie = me(function (e) { return !0 === e.disabled && ("form" in e || "label" in e) }, { dir: "parentNode", next: "legend" }); try { L.apply(A = H.call(w.childNodes), w.childNodes), A[w.childNodes.length].nodeType } catch (e) { L = { apply: A.length ? function (e, t) { q.apply(e, H.call(t)) } : function (e, t) { var n = e.length, r = 0; while (e[n++] = t[r++]); e.length = n - 1 } } } function oe(e, t, r, i) { var o, s, l, c, f, h, v, m = t && t.ownerDocument, T = t ? t.nodeType : 9; if (r = r || [], "string" != typeof e || !e || 1 !== T && 9 !== T && 11 !== T) return r; if (!i && ((t ? 
t.ownerDocument || t : w) !== d && p(t), t = t || d, g)) { if (11 !== T && (f = J.exec(e))) if (o = f[1]) { if (9 === T) { if (!(l = t.getElementById(o))) return r; if (l.id === o) return r.push(l), r } else if (m && (l = m.getElementById(o)) && x(t, l) && l.id === o) return r.push(l), r } else { if (f[2]) return L.apply(r, t.getElementsByTagName(e)), r; if ((o = f[3]) && n.getElementsByClassName && t.getElementsByClassName) return L.apply(r, t.getElementsByClassName(o)), r } if (n.qsa && !S[e + " "] && (!y || !y.test(e))) { if (1 !== T) m = t, v = e; else if ("object" !== t.nodeName.toLowerCase()) { (c = t.getAttribute("id")) ? c = c.replace(te, ne) : t.setAttribute("id", c = b), s = (h = a(e)).length; while (s--) h[s] = "#" + c + " " + ve(h[s]); v = h.join(","), m = K.test(e) && ge(t.parentNode) || t } if (v) try { return L.apply(r, m.querySelectorAll(v)), r } catch (e) { } finally { c === b && t.removeAttribute("id") } } } return u(e.replace(B, "$1"), t, r, i) } function ae() { var e = []; function t(n, i) { return e.push(n + " ") > r.cacheLength && delete t[e.shift()], t[n + " "] = i } return t } function se(e) { return e[b] = !0, e } function ue(e) { var t = d.createElement("fieldset"); try { return !!e(t) } catch (e) { return !1 } finally { t.parentNode && t.parentNode.removeChild(t), t = null } } function le(e, t) { var n = e.split("|"), i = n.length; while (i--) r.attrHandle[n[i]] = t } function ce(e, t) { var n = t && e, r = n && 1 === e.nodeType && 1 === t.nodeType && e.sourceIndex - t.sourceIndex; if (r) return r; if (n) while (n = n.nextSibling) if (n === t) return -1; return e ? 1 : -1 } function fe(e) { return function (t) { return "input" === t.nodeName.toLowerCase() && t.type === e } } function pe(e) { return function (t) { var n = t.nodeName.toLowerCase(); return ("input" === n || "button" === n) && t.type === e } } function de(e) { return function (t) { return "form" in t ? t.parentNode && !1 === t.disabled ? "label" in t ? 
"label" in t.parentNode ? t.parentNode.disabled === e : t.disabled === e : t.isDisabled === e || t.isDisabled !== !e && ie(t) === e : t.disabled === e : "label" in t && t.disabled === e } } function he(e) { return se(function (t) { return t = +t, se(function (n, r) { var i, o = e([], n.length, t), a = o.length; while (a--) n[i = o[a]] && (n[i] = !(r[i] = n[i])) }) }) } function ge(e) { return e && "undefined" != typeof e.getElementsByTagName && e } n = oe.support = {}, o = oe.isXML = function (e) { var t = e && (e.ownerDocument || e).documentElement; return !!t && "HTML" !== t.nodeName }, p = oe.setDocument = function (e) { var t, i, a = e ? e.ownerDocument || e : w; return a !== d && 9 === a.nodeType && a.documentElement ? (d = a, h = d.documentElement, g = !o(d), w !== d && (i = d.defaultView) && i.top !== i && (i.addEventListener ? i.addEventListener("unload", re, !1) : i.attachEvent && i.attachEvent("onunload", re)), n.attributes = ue(function (e) { return e.className = "i", !e.getAttribute("className") }), n.getElementsByTagName = ue(function (e) { return e.appendChild(d.createComment("")), !e.getElementsByTagName("*").length }), n.getElementsByClassName = Q.test(d.getElementsByClassName), n.getById = ue(function (e) { return h.appendChild(e).id = b, !d.getElementsByName || !d.getElementsByName(b).length }), n.getById ? (r.filter.ID = function (e) { var t = e.replace(Z, ee); return function (e) { return e.getAttribute("id") === t } }, r.find.ID = function (e, t) { if ("undefined" != typeof t.getElementById && g) { var n = t.getElementById(e); return n ? 
[n] : [] } }) : (r.filter.ID = function (e) { var t = e.replace(Z, ee); return function (e) { var n = "undefined" != typeof e.getAttributeNode && e.getAttributeNode("id"); return n && n.value === t } }, r.find.ID = function (e, t) { if ("undefined" != typeof t.getElementById && g) { var n, r, i, o = t.getElementById(e); if (o) { if ((n = o.getAttributeNode("id")) && n.value === e) return [o]; i = t.getElementsByName(e), r = 0; while (o = i[r++]) if ((n = o.getAttributeNode("id")) && n.value === e) return [o] } return [] } }), r.find.TAG = n.getElementsByTagName ? function (e, t) { return "undefined" != typeof t.getElementsByTagName ? t.getElementsByTagName(e) : n.qsa ? t.querySelectorAll(e) : void 0 } : function (e, t) { var n, r = [], i = 0, o = t.getElementsByTagName(e); if ("*" === e) { while (n = o[i++]) 1 === n.nodeType && r.push(n); return r } return o }, r.find.CLASS = n.getElementsByClassName && function (e, t) { if ("undefined" != typeof t.getElementsByClassName && g) return t.getElementsByClassName(e) }, v = [], y = [], (n.qsa = Q.test(d.querySelectorAll)) && (ue(function (e) { h.appendChild(e).innerHTML = "", e.querySelectorAll("[msallowcapture^='']").length && y.push("[*^$]=" + M + "*(?:''|\"\")"), e.querySelectorAll("[selected]").length || y.push("\\[" + M + "*(?:value|" + P + ")"), e.querySelectorAll("[id~=" + b + "-]").length || y.push("~="), e.querySelectorAll(":checked").length || y.push(":checked"), e.querySelectorAll("a#" + b + "+*").length || y.push(".#.+[+~]") }), ue(function (e) { e.innerHTML = ""; var t = d.createElement("input"); t.setAttribute("type", "hidden"), e.appendChild(t).setAttribute("name", "D"), e.querySelectorAll("[name=d]").length && y.push("name" + M + "*[*^$|!~]?="), 2 !== e.querySelectorAll(":enabled").length && y.push(":enabled", ":disabled"), h.appendChild(e).disabled = !0, 2 !== e.querySelectorAll(":disabled").length && y.push(":enabled", ":disabled"), e.querySelectorAll("*,:x"), y.push(",.*:") })), (n.matchesSelector = 
Q.test(m = h.matches || h.webkitMatchesSelector || h.mozMatchesSelector || h.oMatchesSelector || h.msMatchesSelector)) && ue(function (e) { n.disconnectedMatch = m.call(e, "*"), m.call(e, "[s!='']:x"), v.push("!=", W) }), y = y.length && new RegExp(y.join("|")), v = v.length && new RegExp(v.join("|")), t = Q.test(h.compareDocumentPosition), x = t || Q.test(h.contains) ? function (e, t) { var n = 9 === e.nodeType ? e.documentElement : e, r = t && t.parentNode; return e === r || !(!r || 1 !== r.nodeType || !(n.contains ? n.contains(r) : e.compareDocumentPosition && 16 & e.compareDocumentPosition(r))) } : function (e, t) { if (t) while (t = t.parentNode) if (t === e) return !0; return !1 }, D = t ? function (e, t) { if (e === t) return f = !0, 0; var r = !e.compareDocumentPosition - !t.compareDocumentPosition; return r || (1 & (r = (e.ownerDocument || e) === (t.ownerDocument || t) ? e.compareDocumentPosition(t) : 1) || !n.sortDetached && t.compareDocumentPosition(e) === r ? e === d || e.ownerDocument === w && x(w, e) ? -1 : t === d || t.ownerDocument === w && x(w, t) ? 1 : c ? O(c, e) - O(c, t) : 0 : 4 & r ? -1 : 1) } : function (e, t) { if (e === t) return f = !0, 0; var n, r = 0, i = e.parentNode, o = t.parentNode, a = [e], s = [t]; if (!i || !o) return e === d ? -1 : t === d ? 1 : i ? -1 : o ? 1 : c ? O(c, e) - O(c, t) : 0; if (i === o) return ce(e, t); n = e; while (n = n.parentNode) a.unshift(n); n = t; while (n = n.parentNode) s.unshift(n); while (a[r] === s[r]) r++; return r ? ce(a[r], s[r]) : a[r] === w ? -1 : s[r] === w ? 
1 : 0 }, d) : d }, oe.matches = function (e, t) { return oe(e, null, null, t) }, oe.matchesSelector = function (e, t) { if ((e.ownerDocument || e) !== d && p(e), t = t.replace(z, "='$1']"), n.matchesSelector && g && !S[t + " "] && (!v || !v.test(t)) && (!y || !y.test(t))) try { var r = m.call(e, t); if (r || n.disconnectedMatch || e.document && 11 !== e.document.nodeType) return r } catch (e) { } return oe(t, d, null, [e]).length > 0 }, oe.contains = function (e, t) { return (e.ownerDocument || e) !== d && p(e), x(e, t) }, oe.attr = function (e, t) { (e.ownerDocument || e) !== d && p(e); var i = r.attrHandle[t.toLowerCase()], o = i && N.call(r.attrHandle, t.toLowerCase()) ? i(e, t, !g) : void 0; return void 0 !== o ? o : n.attributes || !g ? e.getAttribute(t) : (o = e.getAttributeNode(t)) && o.specified ? o.value : null }, oe.escape = function (e) { return (e + "").replace(te, ne) }, oe.error = function (e) { throw new Error("Syntax error, unrecognized expression: " + e) }, oe.uniqueSort = function (e) { var t, r = [], i = 0, o = 0; if (f = !n.detectDuplicates, c = !n.sortStable && e.slice(0), e.sort(D), f) { while (t = e[o++]) t === e[o] && (i = r.push(o)); while (i--) e.splice(r[i], 1) } return c = null, e }, i = oe.getText = function (e) { var t, n = "", r = 0, o = e.nodeType; if (o) { if (1 === o || 9 === o || 11 === o) { if ("string" == typeof e.textContent) return e.textContent; for (e = e.firstChild; e; e = e.nextSibling)n += i(e) } else if (3 === o || 4 === o) return e.nodeValue } else while (t = e[r++]) n += i(t); return n }, (r = oe.selectors = { cacheLength: 50, createPseudo: se, match: V, attrHandle: {}, find: {}, relative: { ">": { dir: "parentNode", first: !0 }, " ": { dir: "parentNode" }, "+": { dir: "previousSibling", first: !0 }, "~": { dir: "previousSibling" } }, preFilter: { ATTR: function (e) { return e[1] = e[1].replace(Z, ee), e[3] = (e[3] || e[4] || e[5] || "").replace(Z, ee), "~=" === e[2] && (e[3] = " " + e[3] + " "), e.slice(0, 4) }, 
2nd International Workshop on Benchmarking RDF Systems
by LDBC / on 09 Oct 2014
Following the 1st International Workshop on Benchmarking RDF Systems (BeRSys 2013), the aim of the BeRSys 2014 workshop is to provide a discussion forum where researchers and industry practitioners can meet to discuss topics related to the performance of RDF systems. BeRSys 2014 is the only workshop dedicated to benchmarking the different aspects of RDF engines, in the spirit of the TPCTC series of workshops. The focus of the workshop is to expose and initiate discussions on best practices, different application needs, and scenarios related to the various aspects of RDF data management.